For decades, futurists have worried about computers attaining true human intelligence. From "2001: A Space Odyssey" to "The Terminator" to "The Matrix," film after film has returned to the theme of intelligent machines wreaking havoc on humanity. By now, even serious thinkers are sounding the alarm about artificial intelligence, warning that robots and automation are already causing significant social and economic disruption.

Two weeks ago, physicist Stephen Hawking told the BBC: "The development of full artificial intelligence could spell the end of the human race." "It would take off on its own and redesign itself at an ever-increasing rate," Hawking warned. "Humans, limited by the slow pace of biological evolution, will not be able to compete and will eventually be replaced."

Elon Musk, the savvy entrepreneur who co-founded PayPal and currently serves as CEO of Tesla Motors (TSLA) and SpaceX, has likewise called AI "the greatest threat to our existence." "With artificial intelligence, we are summoning the demon," Musk said.

In addition, two Oxford University researchers have estimated that computerization threatens nearly half of all jobs in the United States, and that even some creative professional positions will not be immune. "Jobs requiring subtle judgment are also increasingly being computerized," Carl Benedikt Frey and Michael A. Osborne wrote. "In many of these tasks, unbiased judgment based on algorithms may offer advantages over human judgment." Even investment commentary? Let's hope this doesn't put us out of work.

So are the machines really taking over? I interviewed several of the field's leading experts in artificial intelligence research, and the answers I got left me somewhat reassured, but only somewhat. Artificial intelligence is advancing, but certain technical obstacles may slow its progress and prevent machine intelligence from making a great leap forward any time soon.
In time, however, intelligent machines will inevitably perform increasingly complex mental calculations, and may even fully replicate the highly complex operations of the human brain. IBM experts estimate that the human brain performs 368 trillion calculations per second. The combined computing power of all the world's supercomputers is roughly eight times that of a single human brain, and the world's largest supercomputer, located in Guangzhou, China, has about the same computing power as one human brain.

As Hawking pointed out, the exponential growth in computing power follows Moore's Law: the number of transistors on an integrated circuit doubles roughly every eighteen months. In 2007, Yale University economist William D. Nordhaus estimated that computers perform between 1.7 trillion and 7.6 trillion times better than manual calculation. "Computers are approaching the human brain in complexity and computational power," Nordhaus wrote. "Perhaps the computer will ultimately prove to be the ultimate outsourcing."

Peter Bock, professor emeritus of engineering at George Washington University, has studied artificial intelligence for four decades. He says the field's trajectory tracks Moore's Law closely. In his 1993 book, The Emergence of Artificial Cognition, Bock described a fictional artificial intelligence machine, modeled on the human brain, called Mada. He predicted that Mada would become a reality in 2024, and so far, he says, everything is proceeding "right on schedule." In an email interview, he said such an intelligent machine would be able to learn new things so that its intelligence keeps developing, "just as human cognition does." What about human emotions? Bock's answer: "That's the easy part; it's just feedback learning." Intelligent machines will, of course, be far smarter than Pavlov's dogs.
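The exponential growth Hawking describes can be made concrete with a little arithmetic. A minimal sketch, taking the article's eighteen-month doubling period as given (the ten-year span below is an illustrative choice, not a figure from the article):

```python
# Moore's Law as stated in the text: transistor counts on an
# integrated circuit double roughly every eighteen months.

def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Multiplicative growth in capacity after `years`, given the doubling period."""
    doublings = (years * 12.0) / doubling_months
    return 2.0 ** doublings

# One doubling period yields exactly a factor of two;
# a single decade compounds to roughly a hundredfold increase.
print(f"Growth over 1.5 years: {moores_law_factor(1.5):.0f}x")   # 2x
print(f"Growth over 10 years:  {moores_law_factor(10):.0f}x")    # ~102x
```

The compounding is the whole point: a modest-sounding doubling period turns into two orders of magnitude per decade, which is why forecasts like Bock's lean so heavily on the curve continuing to hold.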
In some areas, computers are already ahead of us. As Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley, points out, no human could search billions of documents in an entire lifetime, yet Google finds an answer in under a tenth of a second. Thinking machines, however, still stumble on certain common-sense tasks, as Google's self-driving cars have shown. "You have to decide quickly, or the car will run you over."

This, along with other technical limitations, may slow progress, but a lack of computing power is certainly not among the biggest obstacles. Russell noted that, at present, "we still don't understand" how to program a thinking machine so that it functions like the human brain. "There are still many conceptual and technical problems to be solved in order to design algorithms that can produce intelligent behavior," he wrote in an email.

Nevertheless, big technology companies such as IBM (IBM), Microsoft (MSFT), Facebook (FB), and especially Google (GOOG) are investing heavily in the problem. Google recently spent $400 million to acquire the British company DeepMind Technologies, which has attracted many of the field's elite. I believe in free enterprise, but given the potential dangers of AI, it is worth asking who should be the final arbiter of the technology's development: companies driven by profits and stock market performance, governments, or someone else? Google has set up an ethics committee to oversee DeepMind; one of DeepMind's founders has said, much like Musk, that artificial intelligence is "the number one risk of this century." But consider: has Wall Street's risk management ever prevented a financial crisis? Of course not. When a major breakthrough arrives wrapped in the promise of huge profits, ethics tend to go out the window.
Google declined to comment for this article. Russell, who serves with Hawking on the advisory board of the Centre for the Study of Existential Risk at the University of Cambridge, believes scientists should "understand and eliminate potential risks." "As you develop more and more powerful AI capabilities, you also have to develop better and better control mechanisms," he told me. "We won't really understand what we've done until we get there," Bock said. "The end of that road is as dangerous as a grave for us." What kind of monster might we be letting out of the bottle? I hope we won't have to learn the lesson the hard way. We have made that mistake too many times before.