Kurzweil has written five books on artificial intelligence (AI), including the New York Times bestseller How to Create a Mind.

Two great thinkers see danger in AI. Here's how to make it safe.

Renowned physicist Stephen Hawking[1] recently warned that once artificial intelligence surpasses human intelligence, it could threaten the existence of human civilization. Elon Musk, a pioneer of digital money, private spaceflight, and electric cars, has voiced similar concerns.

If AI becomes an existential threat, it won't be the first. As a child in the 1950s, I sat under my desk during civil-defense drills. Since then we have confronted comparable specters, such as the possibility of a bioterrorist engineering a new virus to which humans have no resistance. Technology has always been a double-edged sword: fire warms us, and fire burns down our villages.

Typical dystopian futurist movies feature one or two individuals or groups fighting for control of AI. But AI is woven into the world of today, and it is not in one or two hands; it is in a billion or two billion hands. A child in Africa with a smartphone has access to more knowledge than the president of the United States had twenty years ago. As AI continues to get smarter, its use will only grow. Virtually everyone's mental capabilities have been amplified by AI over the past decade.

We will still have conflicts among groups of people, each amplified by AI, as has always been the case. But we can take comfort in the profound, exponential decline in violence documented in Steven Pinker's 2011 book The Better Angels of Our Nature: Why Violence Has Declined. According to Pinker, although the statistics vary by region, the rate of death from war is hundreds of times lower than it was six centuries ago, and murders have fallen tenfold. People are often astonished by this. The impression that violence is on the rise comes from another trend: exponentially more abundant information about what is wrong with the world, itself a development enabled by AI.
There are strategies we can deploy to keep emerging technologies like AI safe. Consider biotechnology, which is arguably a few decades ahead of AI. The 1975 Asilomar Conference on Recombinant DNA assessed the field's potential dangers and mapped out strategies to keep it safe. Those best practices have since been refined by industry, and they have worked remarkably well: there have been no significant problems, accidental or intentional, in the 39 years since. We are now seeing tremendous advances in medical treatments reaching clinical practice, with far fewer problems than anticipated.

Thinking about the ethics of AI goes back at least to Isaac Asimov's three laws of robotics[3], introduced in his 1942 short story "Runaround"[2], and to Alan Turing's paper on machine intelligence, "Computing Machinery and Intelligence," published eight years later in 1950. The common view among AI practitioners today is that human-level AI is still several decades away. I am more optimistic and put the date at 2029, but either way, there is still time to develop ethical standards.

There are already efforts at universities and companies to develop AI safety strategies and guidelines, some of which are already in place. Similar to the Asilomar guidelines, one idea is to clearly define the mission of each AI program and to build in encrypted safeguards that block unauthorized use.

Ultimately, the most important way to ensure AI safety is through our commitment to human governance and social institutions. We have already reached the age of human-machine civilization, and the best way to avoid destructive conflict in the future is to continue advancing the social ideals that have so greatly reduced violence.

AI today is advancing the diagnosis of disease, finding cures, developing renewable clean energy, helping to clean up the environment, providing high-quality education to people all over the world, helping people with disabilities (including giving Stephen Hawking his voice), and contributing in countless other ways.
In the coming decades, we have the opportunity to make giant strides in addressing the grand challenges of humanity. We have a moral imperative to realize this promise while controlling the peril. It won't be the first time we've succeeded in doing so.
Original English text: http://time.com/3641921/dont-fear-artificial-intelligence/
Translation from: http://www.labazhou.net/2014/12/dont-fear-artificial-intelligence/