Translation: Chilongfei, Yang Hanshu, Viola, Wei Pang, Zhao Saipo, Zhao Yunfeng

Elon Musk recently stated in public that artificial intelligence is like "summoning the demon," reaching for science-fiction movies to illustrate its dangers. In the author's view, this is irresponsible and misleading to the public: it spreads an unfounded fear that does real damage to the concrete work now being done in the field. Below, the author rebuts Musk's remarks and lays out a fuller argument about how the risks and the development of artificial intelligence should be assessed.

Elon Musk has shortened the distance between the future and the present. His SpaceX has completed successful space exploration missions, and Tesla has played a key role in the electric-car industry. So when Musk talks about the future, people listen. That is why his recent remarks on artificial intelligence have attracted so much attention.

Musk has been steadily escalating his rhetoric about the direction of AI research. Just recently, he described current AI research as "summoning the demon" and joked that HAL 9000, the supercomputer from the movie 2001: A Space Odyssey, would look like a puppy next to the robots of the future. Some time earlier, he explained that he had gotten involved with the AI company DeepMind because he wanted to keep an eye on the progress of a possible "Terminator" scenario.

Such talk generally does more harm than good, especially given how many devoted fans Musk has. Fundamentally, his statements are hype, and when negative publicity of this kind attaches itself to concrete research, the consequences can be seriously harmful.

As Gary Marcus wrote in a New Yorker feature last year, the field has been plagued for decades by the fickleness of public attention, rampant speculation, and the repeated abandonment of its pioneers. The phenomenon has a name, the "AI winter": researchers burn through their funding and then cannot deliver on the promises amplified in the media. The cycle has repeated itself throughout the history of artificial intelligence. As Daniel Crevier (a Canadian entrepreneur and expert in artificial intelligence and image processing) described in his 1993 history of the field, perhaps the most infamous episode of AI hype, the claim that artificial intelligence would defeat humans, came in the 1970s, when DARPA (the Defense Advanced Research Projects Agency) funded many such projects and the results fell far short of expectations.

Yann LeCun, head of Facebook's AI lab, summed it up on Google+ last year: "Hype is very dangerous for artificial intelligence. In the past 50 years, artificial intelligence has sunk four times because of hype. The hype about artificial intelligence must stop."

If we cannot build a fully functional self-driving car within five years, as Musk predicts, what will that do to the field? So forget the Terminator; we should reconsider how we talk about artificial intelligence. Musk's scathing comments about AI mislead precisely the people who know less about it than he does. In fact, our smartest AI products are comparable in intelligence to a toddler, and only on tasks with outside help (such as information recall).
Most robotics researchers are still struggling to get a robot to pick up a ball or to run without falling over, which is a very long way from "Skynet." True, the European Human Brain Project and the US BRAIN Initiative have spent heavily in search of a "digital map" of the human brain, but they have far to go, and much of the technology the work depends on, such as nano-biosensors for better brain scanning, is still in development. This is not to say such research is meaningless. It is certainly meaningful, and nothing about it justifies stoking fear in the public.

"When we hear these scary things about AI, it feels like the kind of fear that comes from aliens and science fiction, where aliens show up and they're three times smarter than us," Selmer Bringsjord, director of the Artificial Intelligence Laboratory at Rensselaer Polytechnic Institute in New York, told me. "Government-backed AI researchers are starting to have these same movie-like fears. It's not about the research that we're doing; it's a deeper, irrational fear."

What Musk and Hawking fear most is the "existential risk" of AI: risk that threatens the entire human race or could destroy us completely. The term was coined in 2001 by Nick Bostrom, director of the Future of Humanity Institute (FHI) at the University of Oxford, and has been popularized ever since. Musk is a fan of Bostrom's recent book, Superintelligence. Daniel Dewey, a scientist at the Future of Humanity Institute, says that existential risks include war, extreme wealth inequality, natural disasters, and malicious artificial intelligence.

There is one obvious point that Musk does not understand, or at least does not express: unlike those other risks, the risk posed by artificial intelligence has to be assessed on its own terms, because the discussion of it remains largely speculative and philosophical. There is no agreed path to follow, and the correctness of the inferences can be verified only in the distant future. Even the researchers at the Future of Humanity Institute hold that an "intelligence explosion" would occur only if machines suddenly acquired intelligence far beyond our expectations.

"I think we need to be very mindful of how much we don't know about the risks of AI," Dewey said. "Most of our current assessments of AI are discussions of possibilities rather than rigorous calculations. We can talk about the possibility of an intelligence explosion, but we cannot be sure of its probability, nor can we predict when it would happen."

Like Hawking, Musk thinks a regulatory board should start monitoring the development of AI, lest we do something "really stupid," like build a robot that kills us. But it is hard to see what an AI board charged with making public policy could accomplish at this point. There is certainly merit in rules for machines that already show some intelligence, as Ryan Calo, a professor at the University of Washington School of Law, has proposed: the self-driving cars, killer drones, and war robots we already have. Strong AI, which remains out of reach, is another story entirely.

"The danger is that AI may make the wrong decision in certain specific situations. A self-driving car, for example, will consider only the realistic and immediate risks, as in the now well-known ethical dilemmas: when crashing the car is the most realistic and reliable option, the machine will do so without hesitation."
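To make that worry concrete, here is a minimal, purely hypothetical sketch of the kind of expected-harm minimization the quote describes. Every action name, outcome probability, and cost value below is invented for illustration and is not drawn from any real driving system.

```python
# Hypothetical sketch: a planner that picks the action with the lowest
# expected harm. All actions, probabilities, and cost numbers here are
# invented for this illustration, not taken from a real system.

def expected_harm(outcomes):
    """Expected harm of an action, given (probability, harm) pairs."""
    return sum(p * harm for p, harm in outcomes)

def choose_action(options):
    """Return the name of the action whose expected harm is lowest."""
    return min(options, key=lambda name: expected_harm(options[name]))

# A toy emergency scenario in which every option carries some harm.
options = {
    "brake_hard":  [(0.4, 0.0), (0.6, 60.0)],   # may fail to stop in time
    "swerve_left": [(0.5, 10.0), (0.5, 60.0)],  # risks oncoming traffic
    "crash_wall":  [(1.0, 25.0)],               # certain, but bounded, damage
}

if __name__ == "__main__":
    best = choose_action(options)
    print(best, expected_harm(options[best]))  # -> crash_wall 25.0
```

With these invented numbers, the deliberate crash minimizes expected harm, so the planner picks it without hesitation, exactly the behavior the quote describes; the choice hinges entirely on the harm values assigned to each outcome.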
"I think AI is a math and engineering problem, not a public policy problem," Bringsjord said. "If we set up a commission, we should focus on the engineering aspects of AI."

According to Bringsjord, good engineering design, not public policy, should be the first line of defense against the potential social ills of artificial intelligence. This is not only because artificial intelligence is new, but because a rigorous design process can head off the failures that occur in computer programs. One way to do this is to revive research on formal software certification, which can precisely detect defects and potential errors in programs, driving the risk of malfunction down toward zero. Research in this area withered in the past as funding declined, but DARPA has begun to restart it through open-source and crowdsourced efforts. The sketch below gives a toy sense of what such certification means.
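As an illustration only, here is a toy, bounded version of the idea in Python: rather than spot-checking a few inputs, it establishes a safety property over every input in a small finite domain. Real certification tools prove such properties symbolically over unbounded domains; the function, the property, and the bounds here are all invented for this sketch.

```python
# Toy sketch of the idea behind formal certification: show that a
# property holds for EVERY input in the domain, not just a few test
# cases. The domain here is deliberately small and finite so it can be
# checked exhaustively; real tools prove this symbolically.

def clamp(lo, hi, x):
    """Force x into the interval [lo, hi] (assumes lo <= hi)."""
    return max(lo, min(hi, x))

def verify_clamp(bound=40):
    """Exhaustively check the safety property lo <= clamp(lo, hi, x) <= hi."""
    for lo in range(bound):
        for hi in range(lo, bound):
            for x in range(-bound, 2 * bound):
                y = clamp(lo, hi, x)
                assert lo <= y <= hi, f"violated at lo={lo}, hi={hi}, x={x}"
    return True

if __name__ == "__main__":
    print("property holds on the bounded domain:", verify_clamp())
```

An assertion failure here would pinpoint a concrete counterexample, which is the same role a failed proof obligation plays in real certification tools.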
"We're more willing to pay for the engineering that's necessary than for these kinds of regulatory boards," Bringsjord told me. "It's like the New Orleans flood: engineers explained to us in detail how to protect New Orleans from flooding, but others thought it would cost too much. So, in essence, reckless penny-pinching is the real threat, not AI."

Yet it is the supposed inherent dangers of advanced AI that seem to be the main focus for Musk and others. The dark prediction that the technology risks our destruction not only misrepresents what AI can actually do in the foreseeable future; it also muddies the discussion of what AI is doing right now. Consider the unemployment caused by technological change: artificial intelligence and mobile robots deployed under the control of capital are already reshaping the world and displacing countless workers, and none of this depends on any future breakthrough. As drone technology advances and its applications spread, military robots able to carry out kill missions on their own, even without strong AI, have become a fiercely controversial topic, yet the organizations calling for a halt to such development are simply ignored. In fact, we do not need the feared breakthrough, an AI as smart as humans, for machines and software to change our world, for better or worse.

This is not to say we should never consider the future applications of advanced AI, but we should do so more responsibly, especially when speaking to the public. Musk's comments have added fuel to the media hype, feeding unsubstantiated fears that are simply the product of overselling a technology still in its infancy, however impressive its daily progress. Self-driving cars, smart security guards, robot coworkers: these possibilities are scary, but also wonderful. This is our future, and AI is not going to make it a crazy one.

This article was originally published in Vice, written by Jordan Pearson and translated by Synced (Machine Intelligence).

Synced hopes to think about, and search for, the common future of humanity amidst the dazzling advance of technology.