[Editor's Note] As a Turing Award winner and chief AI scientist of Meta, Yann LeCun is one of the most prominent defenders of AI technology. When his former collaborators Geoffrey Hinton and Yoshua Bengio signed a statement warning that AI could lead to human extinction, LeCun did not join them. Instead, he signed an open letter calling for an embrace of open-source AI and arguing that AI should not be controlled by a few companies.

So, will AI bring devastating disasters to mankind, or will it accelerate the development of human society? These two opposing voices have persisted in the industry since ChatGPT came out more than a year ago. Recently, LeCun gave an interview to the American digital media outlet WIRED and addressed these questions. The core points are as follows:

- AI will democratize creativity to a certain extent. These systems can write very fluent text with great style, but they are also boring, because what they come up with may be completely fake.
- In the long run, all interactions between humans and the digital world, and to some extent between humans, will be mediated by AI systems.
- AI must be open source, because when platforms become an essential part of the communication structure, we need a common infrastructure.
- LeCun does not consider himself part of any school of thought such as "accelerationism" or "extinctionism"; he does not like these labels.
- LeCun does not endorse the notion of AGI, because there is no such thing as general intelligence. Intelligence is not a linear quantity that can be measured; different types of intelligent entities have different skills.
- Building objectives into AI systems is the only way to ensure they are controllable and safe. LeCun calls this objective-driven AI. It is a completely new architecture, and we have not seen any examples of it yet.
- LeCun believes the research community no longer pays much attention to OpenAI, because OpenAI does not publish papers or reveal what it is doing. Some of his former colleagues and students work at OpenAI, and he feels sad for them because of the instability there.

Academic Headlines has carefully compiled the article without changing the main idea of the original text. The content is as follows:

Don't go doom-mongering to Yann LeCun. A pioneer of modern AI and chief AI scientist at Meta, LeCun is one of the technology's most vocal defenders. He scoffs at the scenarios painted by his peers in which AI floods the world with misinformation or even wipes out humanity, and he frequently posts on X to berate those who spread fear. When his former collaborators Geoffrey Hinton and Yoshua Bengio put their names at the top of a statement calling AI a societal-scale risk, LeCun didn't participate. Instead, he signed an open letter to U.S. President Joe Biden calling for an embrace of open-source AI and arguing that it shouldn't be controlled by a few corporations.

LeCun's opinion carries weight. Along with Hinton and Bengio, he helped create the deep learning methods that have been crucial to advancing AI, earning the trio the Turing Award, the highest honor in computing. In 2013, Meta (then Facebook) hired him as the founding director of FAIR, the company's AI research lab. He is also a professor at New York University. More recently, he helped convince CEO Mark Zuckerberg to share some of Meta's AI technology with the world: this summer, Meta released an open-source large language model (LLM) called Llama 2 to compete with models from OpenAI and Google.
Some critics warn that this open-source strategy could allow bad actors to modify the code and circumvent the guardrails that keep LLMs from generating harmful content. LeCun, one of the most prominent figures in AI, believes humans can solve this problem.

This fall, I spoke with LeCun in a conference room at Meta's Midtown office in New York. We talked about open source, why he thinks the dangers of AI are overblown, and whether a computer could ever move people the way Charlie Parker's saxophone solos do. (LeCun grew up outside Paris and frequented New York's jazz clubs.) We talked again in December, when LeCun was attending the NeurIPS conference. This interview has been edited for length and clarity.

Steven Levy: In a recent talk, you said "machine learning sucks." Why would an AI pioneer like you say that?

Yann LeCun: Machine learning is great. But the idea that we can reach human-level AI by simply scaling up existing technology? We are missing something important that allows humans and animals to learn so efficiently, and we don't yet know what it is. I don't want to bash these systems or say they're useless; I've focused on them throughout my career. But we have to curb the excitement of people who think we just need to scale up and we'll quickly reach human-level intelligence. That's absolutely not the case.

And you feel it's your responsibility to call this out.

That's right. AI will bring many benefits to the world, but some people are abusing the moment by instilling fear of the technology. We have to be careful not to scare people away from it. That's a mistake we've made with other technologies that went on to revolutionize the world. Take the invention of the printing press in the 15th century. The Catholic Church hated it, right? People could read the Bible on their own without having to ask a priest. Almost everyone in power opposed the widespread use of the printing press because it would change the power structure. They were right: it caused 200 years of religious conflict. But it also led to the Enlightenment. [Note: Historians may point out that the church actually made use of the printing press for its own purposes, but that's how LeCun sees it, anyway.]

Why are so many of the biggest names in tech sounding the alarm about AI?

Some people are seeking attention, and some people are not seeing the reality of the situation today. They don't realize that AI can actually reduce hate speech and misinformation. At Meta, we've made huge strides in this area using AI. Five years ago, of all the hate speech we removed from the platform, about 20% to 25% was taken down by AI systems before anyone saw it. Last year, it was 95%.

What do you think of chatbots? Are they powerful enough to replace human jobs?

They're great, and people are making huge strides with them. They will democratize creativity to a certain extent. They can write very fluent text with great style, but they're also boring, because what they come up with may be completely fake.

Meta seems intent on developing these technologies and building them into products.

In the long term, all of our interactions with the digital world, and to some extent with each other, will be mediated by AI systems. We're experimenting with things that aren't powerful enough for that yet but are headed in that direction: systems that help people create in their daily lives, whether it's text or real-time translation, and eventually, perhaps, in the metaverse.
How is Mark Zuckerberg pushing AI work forward at Meta?

Mark is very engaged. I had a discussion with him earlier this year and told him what I just told you: that in the future all of our interactions will be mediated by AI. ChatGPT showed us, sooner than we expected, what AI could do for new products. We saw that the public was far more fascinated by AI's capabilities than we had thought. So Mark decided to create a product division focused on generative AI.

Why did Meta decide to share the Llama code with others, open source style?

When you have an open platform that many people can contribute to, progress happens much faster. The systems that come out of it are more secure and perform better. Imagine a future where all of our interactions with the digital world are mediated by AI systems. You don't want those AI systems to be controlled by a small number of companies on the West Coast of the United States. Maybe Americans don't care; maybe the U.S. government doesn't care. But I'm telling you right now, in Europe, they won't like it. They'll say, "Okay, this speaks correct English. But what about French? What about German? What about Hungarian, or Dutch, or any other language? How did you train it? How does this reflect our culture?"

This seems like a great way to get startups to use your product and beat out your competitors.

We don't need to win anyone over; this is where the world is going. AI must be open source, because when platforms become an essential part of the fabric of communication, we need a common infrastructure.

One company that disagrees is OpenAI, and you don't seem to like it.

When they started, they envisioned a nonprofit doing AI research to counterbalance companies like Google and Meta that dominated industry research. I thought that was the wrong idea, and it turns out I was right. OpenAI is no longer open. Meta has always been open and still is. The second thing is that it's hard to do substantive AI research unless you have a way to fund it. Eventually they had to create a for-profit arm and take investment from Microsoft. So although OpenAI retains some independence, they are now essentially a contract research house for Microsoft. The third point is their belief that AGI was just around the corner and that they'd be the ones to develop it before anyone else. That hasn't happened.

What do you make of the drama at OpenAI, with Sam Altman ousted as CEO and then brought back under a different board? Does it have any implications for the research community or the industry?

I think the research community doesn't care much about OpenAI anymore, because they don't publish papers and don't disclose what they're working on. Some of my former colleagues and students work at OpenAI, and we feel sad for them because of the instability. Research needs stability to thrive, and when there's drama like this, people become hesitant. Openness also matters to people doing research, and OpenAI really isn't open anymore. So in that sense, OpenAI has changed; they're no longer seen as a contributor to the research community. The momentum now is with open platforms.

The episode has been called a victory for AI "accelerationism," the exact opposite of "extinctionism." I know you're not an extinctionist, but are you an accelerationist?

No, I don't like those labels. I don't belong to any school of thought.
I'm very careful not to push any kind of thinking to an extreme, because it's too easy to take it to the point of doing stupid things.

The European Union recently released a set of AI regulations, one of which largely exempts open-source models. How will this affect Meta and other companies?

It affects Meta to some extent, but we are big enough to comply with any regulation. It matters much more for countries that don't have the resources to build AI systems from scratch. They can rely on open-source platforms and build AI systems that match their culture, language, and interests. In the not-too-distant future, the vast majority of our interactions with the digital world will be mediated by AI systems. You don't want those to be controlled by a small number of companies in California.

Were you involved in helping regulators reach this conclusion?

I've been talking to governments, though not to the EU regulators directly: especially the French government, and indirectly to others as well. Basically, they don't want their citizens' digital diet to be controlled by a small number of people, and the French government accepted this idea very early. Unfortunately, I haven't spoken with people at the EU level, who have been more influenced by doomsday predictions and want to regulate everything to head off the disasters they think are possible. But the French, German, and Italian governments opposed that and insisted that the EU make special provisions for open-source platforms.

But isn't open-source AI hard to control and regulate?

For products where safety really matters, regulations already exist. If you use AI to design a new drug, for example, there are already rules to make sure the product is safe. That makes sense to me. The debate is over whether it makes sense to regulate the research and development of AI. I don't think it does.

Couldn't someone take over the world by exploiting the sophisticated open-source systems released by large companies? With access to the source code and the model weights, a terrorist or a fraudster could give an AI system destructive capabilities.

They would need 2,000 GPUs somewhere nobody can detect, plus enough money and enough talent to do the job.

I think they will eventually figure out how to make their own AI chips.

Yes, but they'll be a few years behind the state of the art. That's the history of the world: every time technology advances, you can't stop the bad guys from getting it eventually, and then it's good AI versus evil AI. The way to stay ahead is to progress faster, and the way to progress faster is to open up research so more people can participate.

How do you define AGI?

I don't like the term AGI, because there is no such thing as general intelligence. Intelligence is not a linear thing you can measure. Different types of intelligent entities have different sets of skills.

Once computers reach human-level intelligence, they won't stop there. With deeper knowledge, better machine math, and better algorithms, they'll create superintelligence, right?

Yes, there's no doubt machines will eventually become smarter than humans. We don't know how long it will take; it could be years or centuries.

Do we have to surrender at that point?

No. We will all have AI assistants, and it will be like working with a staff of super-smart people, except they won't be human. Humans may feel threatened by this, but I think we should feel excited.
What excites me most is working with people who are smarter than me, because it expands your own capabilities.

But if computers become superintelligent, why would they still need us?

There is no reason to believe that AI systems, just because they're intelligent, will want to replace humans. It's a mistake to assume they will have the same motivations as humans. They won't, unless we build those motivations into them.

What if humans don't set those goals, and a superintelligent system ends up pursuing a goal that harms humanity? Like the example philosopher Nick Bostrom gives: a system designed to make paper clips, no matter what, takes over the world in order to make more paper clips.

It would be foolish to build a system without any safeguards. That would be like building a car with a 1,000-horsepower engine and no brakes. Building objectives into AI systems is the only way to make them controllable and safe. I call this objective-driven AI. It's a completely new architecture, and we haven't seen any examples of it yet.

Is that what you're working on now?

Yes. The idea is that the machine has objectives it must satisfy, and it cannot produce anything that doesn't satisfy those objectives. Those objectives might include guardrails to prevent dangerous things from happening, and that's what would make an AI system safe.

Do you think you'll come to regret the consequences of the AI you helped bring about?

If I thought that, I would stop doing what I'm doing.

You're a jazz fan. Can anything AI produces rival the transcendent creativity that, so far, only humans have achieved? Can it create work with soul?

The answer is complicated. Yes, AI systems will eventually create music, or visual art, or other works whose technical quality is similar to, or even better than, what humans can do. But an AI system lacks the essence of improvised music, which relies on the communication of human feeling. AI doesn't have that ability, at least not yet. That's why jazz is meant to be heard live.

You haven't answered whether that music would have soul.

You can already get music that's completely soulless: the kind you hear in the background at restaurants. It's produced mostly by machines, and there's a market for it.

But I'm talking about the pinnacle of the art. If I played you the greatest Charlie Parker recording of all time and then told you it was generated by AI, would you feel cheated?

Yes and no. Yes, because music is not just an auditory experience; a lot of it is cultural. It's about admiration for the performer. Your example would be like Milli Vanilli: authenticity is an essential part of the artistic experience.

If AI systems got good enough to rival the elite of artistic achievement, and you didn't know the backstory, the market would be flooded with Charlie Parker-level music, and we couldn't tell the difference.

I don't think there's anything wrong with that. I'd still buy the original, just as I'd still pay $300 for a handmade bowl from a centuries-old craft tradition even if I could buy one that looks almost the same for $5. And I'll still go to hear my favorite jazz musicians live, even if they can be imitated. An AI system is just not the same experience.

You recently received an honor from President Macron whose name I can't read out in French.

The Chevalier de la Légion d'honneur. It was created by Napoleon. It's a bit like a British knighthood, except we had a revolution, so we don't call anybody "Sir."
Does it come with any weaponry?

No, there's no sword or anything like that. But recipients can wear a small red ribbon on their lapels.

Can an AI model win this award?

Not any time soon. And I don't think it would be a good idea anyway.

Original author: Steven Levy
Original link: https://www.wired.com/story/artificial-intelligence-meta-yann-lecun-interview/
Compiled by: Yan Yimi