From "2001: A Space Odyssey" to "The Matrix," movies and science fiction writers have long warned us to be wary of artificial intelligence, which might betray or even destroy humanity at any time. As breakthroughs in the field continue, this concern is being voiced more and more often.

Now the five technology giants actively developing artificial intelligence, namely Google's parent company Alphabet, Facebook, Microsoft, IBM, and Amazon, have also recognized this problem and plan to develop a mature set of ethical standards for AI. According to the New York Times, the five companies recently held a meeting to discuss ethical standards for artificial intelligence in the labor market, transportation, and war. They also plan to establish an industry association whose purpose is to ensure that artificial intelligence benefits humans rather than harming them, though the participants have not yet settled on a name for the group.

Even as technology giants actively research artificial intelligence, voices in the technology community warn against AI coming to dominate humans. At the Code Conference held in June this year, for example, Elon Musk, the founder of Tesla and SpaceX, said that as artificial intelligence develops, humans will be left far behind in intelligence and could end up as the pets of AI. Last July, Musk also donated $10 million to the Future of Life Institute, which assesses and analyzes the risks posed by artificial intelligence. Stephen Hawking later claimed on a TV show that some laboratory somewhere in Silicon Valley was brewing an evil plan: artificial intelligence would evolve faster than humans, and its goals would be unpredictable to us.
"Once machines reach a critical stage where they can evolve on their own, we will not be able to predict whether their mission will remain the same as ours," Hawking said.

In response, Google Chairman Eric Schmidt said that artificial intelligence will develop in the direction of human welfare, and that there will be systems in place to prevent it from developing in the wrong direction. "We have all seen those science fiction movies," he said, "but in the real world, humans definitely know how to shut down the system when artificial intelligence becomes dangerous."

It is not only technology companies and civil organizations that want to formalize the ethics and laws of artificial intelligence; government agencies have also joined the discussion. In May this year, the US government held its first White House workshop on AI law and regulation, in Seattle. The theme of the symposium was "Should the government regulate artificial intelligence?", but the meeting reached no conclusion on the question, let alone introduced specific regulatory measures.

So, before the government took concrete action, the tech companies acted on their own: they decided to develop their own framework to allow AI research to proceed while ensuring it does not harm humans. Eric Horvitz, a Microsoft researcher who took part in the industry discussion, helped found the One Hundred Year Study on Artificial Intelligence at Stanford University, which recently published its 2016 report. In the report, titled "Artificial Intelligence and Life in 2030," the authors argue that AI "is likely to be regulated." "The consensus among the study group is that attempts to regulate AI in general would be misguided, because there is no clear definition of what AI is, and different risks need to be considered in different domains," the report said.
The report also recommends that government agencies build up professional expertise in AI and that public and private investment in the field be increased. A memo about the meeting shows that the five companies will announce the establishment of an industry organization to govern artificial intelligence in mid-September. People familiar with the matter said that Alphabet's subsidiary Google DeepMind has requested to participate in the association as an independent member. The association will be modeled on human rights organizations such as the Global Network Initiative, which focuses on issues of free speech and privacy.