Human-machine symbiosis is a perennial theme in science fiction, and PLUTO may be the earliest comic to explore how humans and robots can coexist amid conflict. If Astro Boy is the "peacemaker" of human-machine symbiosis, who inspired generations to trust and love robots, then Pluto is his opposite: a killing machine with hatred in its heart. In "The Greatest Robot on Earth," a story arc of "Astro Boy," a super robot named Pluto challenges the world's seven strongest robots, Astro Boy among them. Naoki Urasawa later adapted this story at much greater length, depicting the many contradictions and conflicts in the coexistence of humans and machines.

Recently, Netflix brought Pluto's story to the screen. The animated series "Pluto" was a hit on release, earning a Douban score of 9.1 and an IMDb score of 9.3 to rave reviews. Although its theme is the familiar "human-machine symbiosis," watching it in 2023 feels different. This year, the "emergent intelligence" of large models has demonstrated human-level reasoning; embodied intelligence has accelerated the arrival of the human-machine coexistence once confined to science fiction films; and the realization of AGI no longer seems out of reach.

At the same time, the impacts and dangers of large models and AGI have begun to surface. Artists have jointly opposed AI training on their works; one reported cause of the OpenAI boardroom coup was scientists' fear of "AI out of control"; and the AI competition between China and the United States has set off round after round of maneuvering. Nor should we forget the wider world: many regions are caught in the shadow of wars much like the Middle Eastern war in "PLUTO." The real world we live in is crossing the singularity without our noticing. What will human-machine symbiosis look like in the near future?
Will AI be used to kill, as Pluto is? Are we ready to welcome super-intelligent robots with "100,000 horsepower"? Why not bring the same curiosity you had watching Astro Boy cartoons as a child, and imagine along with us.

The killing machine awakens? Another cry of "wolf"

To avoid spoilers, the plot of "PLUTO" can be summarized in one sentence: robots calling for love at the center of the universe. In a future where humans and robots live together, robots have the same emotions and thinking abilities as humans and can build happy families with them. Yet a series of bizarre murders exposes the hidden contradictions of human-machine coexistence: robots have learned hatred and raised their guns in anger...

Does this story sound familiar? Science fiction is full of it: in the distant year 3xxx, human society is highly advanced and artificial intelligence is everywhere, until one day the robots awaken, like Dolores in "Westworld," discover that humans have been oppressing them in various ways, and begin to fight back and seek revenge. The awakening of robot consciousness, and the uncontrollable threat it poses to humans, is almost a mandatory setting for every "technological ghost story."

PLUTO, however, is just another cry of "wolf." First, even if strong artificial intelligence (AGI) is achieved, IQ and consciousness are not the same thing. For a long time to come, intelligent machines will only simulate human emotional expression through voice, facial expressions, and posture, without genuinely experiencing emotion. What's more, we are still far from strong artificial intelligence: in a recent Turing-test study, GPT-4 scored well below humans. The fear of "machine consciousness awakening" is just another form of the "uncanny valley effect."
Second, if robots really did develop consciousness, would they necessarily be evil? Not necessarily. Just as Astro Boy and Pluto belong to opposing camps, conscious robots could divide into "good" and "evil" as well. Given our current understanding of AI and the present relationship between humans and robots, robots are unlikely to feel "threatened by humans," and so unlikely to become human killers or robot rebels. In recent science fiction such as "The Wandering Earth 2" and "Pluto," the threats posed by robots mostly arise from network vulnerabilities, malicious human operators, or algorithms that, in computing how to keep humanity going, do harm with good intentions.

Whenever an extremely capable intelligent system appears, as ChatGPT did, the myth of machine "superintelligence" returns, hawking ideas like "machine intelligence will surpass human intelligence" and "machines will replace humans." These claims do not hold up: however powerful a machine, it does not amount to defeating human wisdom. People simply like to believe thrilling stories and imagined dangers. From another angle, this instinctive sensitivity to danger may be engraved in the DNA of a species that survived in the wild. From "The Boy Who Cried Wolf" to "the killing machine awakens," this kind of story will always find an eager audience, and can always be restarted with a new protagonist.

The hidden real problem: the social risks of deep symbiosis

While people debate the false risk of intelligent machines awakening, the real dangers may be obscured. Those real risks are the problems of human-machine symbiosis itself. The arrival of large models is letting machines evolve from automated systems into highly autonomous ones.
So-called automated systems, such as robot vacuums and thermostats, only execute fixed tasks toward fixed goals in a controlled environment. Embodied intelligence such as household robots and self-driving cars is obviously not that simple: it must detect obstacles accurately and promptly in a dynamically changing physical environment, and make appropriate decisions and corresponding actions in real time. Such highly autonomous systems display intelligence resembling our own, and large models are making their embodied intelligence a reality. That sounds wonderful, but this future of symbiosis between humans and autonomous systems carries security risks:

1. The algorithm black box. AI possesses "dark knowledge": it can make decisions without the support of theory or mathematical analysis, and can outperform human experts in tasks like earthquake and weather prediction. But AI does not truly understand the scientific principles or mathematical models behind its outputs, and if a critical system is unreliable, safety is hard to guarantee in production. When engineers design an aircraft autopilot, the mathematical model is a white box and can be certified against safety standards with high confidence; the black-box nature of learned algorithms offers no such assurance of safety and reliability.

2. Unemployment. The higher the degree of automation and the more widespread the robots, the higher the unemployment rate. Not every coachman could learn to drive a car, and many jobs replaced by autonomous systems will never come back. Occupations that demand creativity and resist replacement by AI, such as programming and artistic creation, can offer work to only a few. Changes in the occupational structure will inevitably bring the pain of unemployment and, in turn, resistance to the AI industry's development.
The past year's widespread opposition from illustrators to AIGC is clear proof of this.

3. Technological dependence. For most people, AI applications and robots will certainly make life more comfortable: autonomous delivery vehicles and hotel service robots greatly reduce manual labor, and ChatGPT helps students with homework and papers. But this also means humans will gradually lose certain skills. Children of the future may no longer need to memorize Tang and Song poetry; robots will deliver everything meticulously to their hands, and declining daily activity will bring hidden dangers such as obesity. As algorithmic predictions grow ever more accurate, we will come to rely on AI to make decisions rather than on the advice of people around us or on our own feelings and intuition, interrupting the positive loop of decision, action, and feedback and leaving people less and less confident. Human-machine coexistence is a frog boiling in warm water; worry about excessive dependence on technology is not groundless.

4. Political conflict. Another potential threat comes from suspicion and maneuvering among governments and corporate organizations. Control over intelligent technology has become a political question, and the technological competition centered on it is setting off a political contest reminiscent of the Cold War. In an article published in The New Yorker on November 20, 2023, titled "Why the Godfather of AI Fears What He's Built," Turing Award winner Geoffrey Hinton said bluntly: "We don't know what artificial intelligence will become." Asked by the reporter, "Why not just unplug it?" Hinton replied: "Because of competition between different countries."
Earlier, he had declined to sign the petition calling for a pause of at least six months in AI research, saying, "China will not stop research and development for six months." There is reason to believe that, for political considerations, some governments, companies, or scientists will simply let the risks of artificial intelligence run their course.

As we move from a society of human autonomy to a human-machine symbiotic society that includes machine autonomy, its operating mechanisms must change accordingly. The shift may not directly produce disasters like killings, but its impact on employment, culture, and political order will create security risks of its own.

From now on, Pandora's box is open

You may ask: now that the real risks have been identified, can we take precautions in advance and reduce the friction of human-machine symbiosis? From both a technical and a social perspective, that wish is not very realistic.

Technically, AI differs from every previous technological risk in human history. As an algorithm that learns and evolves autonomously, AI must constantly ingest vast amounts of data and perform complex internal transformations and "unsupervised learning"; the rules and hidden bugs inside may not be caught by human engineers in time. Programmers like to joke: if a program runs and works, don't touch it, and don't even think about cleaning up the "mountain of legacy code." Humans, in other words, may not be able to keep AI's growth in their own hands. A year ago, who imagined that ChatGPT's "emergent intelligence" would turn the world upside down?

Socially, the way most countries handle crises is usually not "prevention" but "dynamic balance." Prevention requires assessing the worst case in advance and then investing in defenses; it is very costly, and is generally reserved for medicine, natural disasters, and war.
The risk of strong artificial intelligence is not seen as that urgent; some even joke that you can "just unplug the AI at the critical moment." Guarding against this crisis will therefore weigh cost against benefit: societies will not sacrifice maximal gains for absolute safety, but will seek a dynamic balance between economy and security. That "dynamic balance" leaves certain vulnerable groups exposed to greater risk, such as workers in repetitive jobs, junior white-collar staff, and producers of simple knowledge work, all of whom face the pressure of replacement.

The large-model path of AI is opening the "Pandora's box" of strong artificial intelligence. Whether it releases Pluto, Astro Boy, or both is still unknown. Since a society in which humans and robots are deeply integrated and coexisting is inevitable, what should sustain our optimism and confidence? I think PLUTO may have already given the answer: love. The seeds of humanity will stubbornly sprout in the flames of war and the scorched earth, and rebuild the new world again and again.