AI consciousness has "awakened", but humans are still kept in the dark?

We need AI to be smarter, not conscious. Without consciousness, AI is just a tool: the smarter it is, the more useful it is to us. Conversely, even an AI that is weaker than humans in many respects could pose a potential threat the moment it becomes conscious. But how do we judge whether an AI is conscious? The scientific community has no consensus answer, though fortunately some researchers have begun to look for one.

Written by | Lee Hyun-hwan

Figure | Image generated by Stable Diffusion

Can artificial intelligence (AI) become "conscious"? And how would we judge whether it truly is? Many scholars and science fiction writers have pondered this question.

Especially in the past two years, alarm bells have been ringing. Leading experts, including "AI godfather" Geoffrey Hinton and OpenAI co-founder and chief scientist Ilya Sutskever, have warned that AI may develop self-awareness.

In the pre-large-model era, AI applications such as face recognition and speech recognition had already penetrated every corner of society. These technologies look sophisticated, but they are merely the output of programs, and a program never truly understands its own behavior. This is AI without self-awareness: however advanced, it remains a "puppet" fully controlled and directed by humans.

People have been happy to watch it develop, based on a shared assumption: tools can be more or less advanced, but a tool will never come to life.

By now, everyone knows the story of ChatGPT. Like a spark landing on dry tinder, it turned a cutting-edge laboratory exploration into a seemingly magical technology available to almost anyone with Internet access. At the same time, worries about AI consciousness awakening have been growing.

This is because we need AI to be smarter, not conscious. Without consciousness, AI is just a human tool, and the smarter it is, the more useful it is. Conversely, even an AI weaker than humans in many respects could pose a potential threat once it becomes conscious: we do not want algorithms going on strike at will, and we cannot accept a world controlled by AI.

Therefore, we need to monitor the development of AI technology closely, so as not to miss the critical moment when AI consciousness awakens.

Recently, a team of 19 computer scientists, neuroscientists and philosophers, including Turing Award winner and deep learning pioneer Yoshua Bengio, set out to explore this question. In a recently released 120-page preprint, the team proposed a method: use a series of indicator properties derived from theories of consciousness to make a preliminary judgment about whether an AI is conscious.

The search for answers has just begun.

To test whether AI is conscious, a "test paper" is urgently needed

The backdrop of this research is that even in academia, there is no settled way to judge whether an AI is conscious. In neuroscience, signals of the brain's internal workings, detected by electroencephalography (EEG) or magnetic resonance imaging (MRI), can support the conclusion that "humans are conscious." But this mature method cannot be applied to algorithms.

So, drawing on cognitive neuroscience, the team first extracted the core features of the conscious state from current theories of human consciousness, and then looked for those features in the architecture of AI systems.

The research team extracted specific features from several current theories of consciousness, including:

Recurrent processing theory (RPT): consciousness arises from the brain passing experience through "feedback loops," using prior knowledge and connections to make sense of current experience.

Global neuronal workspace (GNW) theory: an account of how the brain coordinates the many streams of information it processes simultaneously. Here consciousness is like a spotlight on a mental stage, determining what we focus on and what we ignore (a toy illustration follows this list).

Higher-order thought (HOT) theories: a family of theories holding that consciousness is real-time awareness of our own thoughts and feelings; consciousness is defined as the ability to "think about thinking."

Attention schema theory (AST): consciousness results from the brain directing attention to specific objects, thoughts, memories and other stimuli while filtering out the rest. The key element is awareness of how and where one's attention is directed.

In addition, the researchers added criteria based on predictive processing, the brain's ability to predict and interpret the surrounding world from past experience. This is particularly relevant to AI models designed to generate creative content or solve complex problems.

They also developed criteria based on agency, the ability to make deliberate decisions, and on embodiment, the degree to which a system is situated in physical space or relative to other virtual systems.
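To make the workspace-as-spotlight idea concrete, here is a toy Python sketch. It is purely illustrative and not taken from the paper: the module names and salience scores are invented, and real GNW-inspired architectures are far more involved.

```python
# Toy cartoon of the global workspace idea: specialist modules
# compete for the "spotlight", the most salient content wins,
# and the winner is broadcast back to every module.
# Module names and salience scores are invented for illustration.

modules = {
    "vision":  ("a red ball", 0.9),      # (content, salience)
    "hearing": ("a dog barking", 0.4),
    "memory":  ("yesterday's walk", 0.2),
}

# The spotlight: the most salient content enters the workspace...
workspace_content = max(modules.values(), key=lambda m: m[1])[0]

# ...and is broadcast globally, so every module can act on it.
for name in modules:
    print(f"{name} receives broadcast: {workspace_content}")
```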

Based on these core theories of consciousness, the research team derived 14 criteria with specific ways of measuring them, and then evaluated existing AI models against this checklist (a rough sketch of such an evaluation follows).
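As a rough illustration of how a checklist-style evaluation could be encoded, consider the Python sketch below. The indicator names, theory tags and pass/fail judgments are hypothetical stand-ins, not the paper's actual 14 indicator properties or its assessment method.

```python
from dataclasses import dataclass

# Hypothetical sketch of a checklist evaluation; the indicators
# below are invented stand-ins, NOT the paper's actual criteria.

@dataclass
class Indicator:
    name: str        # theory-derived property being checked
    theory: str      # consciousness theory it comes from
    satisfied: bool  # does the architecture exhibit it?

def score_architecture(indicators: list[Indicator]) -> float:
    """Fraction of indicator properties the architecture satisfies.

    Per the study's framing, a higher fraction suggests a higher
    (but never certain) likelihood of consciousness-relevant structure.
    """
    return sum(i.satisfied for i in indicators) / len(indicators)

# Invented evaluation of a hypothetical language model:
checklist = [
    Indicator("recurrent feedback connections", "RPT", satisfied=True),
    Indicator("global broadcast workspace",     "GNW", satisfied=True),
    Indicator("metacognitive self-monitoring",  "HOT", satisfied=False),
    Indicator("model of its own attention",     "AST", satisfied=False),
]
print(f"Indicators satisfied: {score_architecture(checklist):.0%}")  # 50%
```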

Figure | Image generated by Stable Diffusion

Their conclusion: no AI currently on the market is conscious. On one hand, the results suggest current AI capabilities have not reached the point where consciousness could form; on the other, when building and training AI models, developers have not deliberately set out to give them these cognitive abilities in full. "The reason why no one does this is because it is not clear whether they are useful for the task," said Eric Elmoznino, one of the paper's authors.

At the same time, the study holds that the more indicators an AI architecture satisfies, the greater the possibility that it is conscious. In the tests, many models on the market scored well on the recurrent processing theory indicators, while ChatGPT stood out on the global workspace indicators. Overall, though, every AI model tested satisfied only a handful of indicators, suggesting that none of them has formed self-awareness.

The researchers emphasized, however, that the study's original intention was to raise questions and discussion, providing a starting point for talking about consciousness in AI models. Robert Long, a co-author of the paper who works at a nonprofit organization in San Francisco, said the work offers a framework for evaluating AI systems that increasingly take on human characteristics: "We are introducing a systematic approach that was previously missing."

The rigor of this testing method still needs verification, but whatever comes of it, the move is unprecedented: scientists are using a body of theory developed to study human consciousness to test AI. It is also a measure of how fast AI capabilities are advancing that scientists feel compelled to draw up a more advanced "test paper" for it. In consciousness and cognition, the closer AI gets to humans, the closer its problems, and potential disasters, get to humans.

If AI becomes conscious, humans will be in big trouble

In contrast to the heated debate among scholars, most people have not thought seriously about AI consciousness, or assume it is only a matter of time and therefore not very important. Science fiction novels and films play a big role in this; after all, in most movies set in the future, AI is portrayed as a being with consciousness and emotions. In fact, many people underestimate this issue, yet it may determine the future direction of humanity.

Compared with the limited processing capacity of the human brain, AI's processing capacity is, in principle, unbounded, and it can process all collected information indiscriminately. Once AI gains consciousness, it could communicate and share information the way humans do, but with far greater efficiency. The result might be that AI's intelligence rises rapidly and it "eliminates" lower-intelligence entities, including humans, much as smartphones ruthlessly eliminated the feature phones of the previous era.

The threats could become very specific. In an interview shortly after leaving Google, Hinton said: "These agents we create are very powerful and learn everything from humans, such as reading every novel ever written and everything Machiavelli wrote about how to manipulate people. If they learn to manipulate humans, you and I won't even realize the process is happening. You will be like a 2-year-old being asked, 'Do you want peas or broccoli?', never realizing you could choose neither. You will become easy to manipulate."

In the most extreme case, Hinton said, once AI and digital agents begin to gain direct experience of the world and their learning ability improves, digital intelligences may keep humans around for a while, because they still need humans to generate electricity for them, but at some point they may not. Ultimately, humans would become a transitional stage in the evolution of intelligence, and the world would enter an era of machines.

For this scenario to materialize, AI would need to keep evolving in "intelligence"; more importantly, it would need the goal and intention to "manipulate", and having goals is itself a manifestation of consciousness.

Of course, reality may not be that terrible, at least for now. Before AI "destroys" human society, it may first raise ethical issues. Neuroscientist Christof Koch, a leading figure in the study of consciousness, once offered an example when discussing AI consciousness: if a man smashes his own car with a hammer, his neighbors may think he is crazy, but the car is his property; if he abuses his pet dog, the police will come to his door, and moral condemnation will inevitably follow. The core difference is that a dog is conscious and can feel pain, while a car is not a conscious entity.

And when machines become conscious entities, moral, ethical, legal, and political implications that we have never imagined will follow.

Clearly, human society is not ready for this, including the scholars who study such forward-looking questions for a living. The research above can be seen as academia's effort to establish a new "Turing test" for the new AI era.

Figure | Alan Turing

In 1950, Alan Mathison Turing, known as the "father of computers" and "father of artificial intelligence", single-handedly set the testing standard for the field of machine intelligence. In his paper "Computing Machinery and Intelligence", which opens by asking "Can machines think?", Turing proposed the "Turing test", boldly imagining that machines could come to think, and suggested a way to test machine intelligence: converse with people over a text channel, and if after a fixed time (5 minutes) more than 30% of interrogators cannot tell whether they are talking to a human or a machine, the machine can be considered able to think. The test's distinctive feature is that since "consciousness", "thinking" and "intelligence" are hard to pin down by definition, it simply uses humans as the reference: a machine counts as intelligent if its performance is indistinguishable from a human's (the pass criterion is tallied in the sketch below).
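Here is a minimal Python sketch of that pass criterion. The judge verdicts are invented for illustration; only the "more than 30% fooled within 5 minutes" rule comes from the description above.

```python
# Toy tally of the Turing test pass criterion described above:
# the machine "passes" if more than 30% of judges fail to
# identify it as a machine within the 5-minute conversation.

def passes_turing_test(judge_fooled: list[bool], threshold: float = 0.30) -> bool:
    """judge_fooled[i] is True if judge i could not tell machine from human."""
    return sum(judge_fooled) / len(judge_fooled) > threshold

# Invented verdicts from ten judges: 4 of 10 were fooled.
verdicts = [True, False, True, False, False, True, False, False, True, False]
print(passes_turing_test(verdicts))  # 0.40 > 0.30 -> True
```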

To this day, the Turing test remains an important yardstick for measuring AI's intelligence. Some researchers, however, argue that it is outdated: it is too human-centric, and its design has inherent flaws. It has also fostered a deceptive mindset, namely that an AI counts as "intelligent" as long as it successfully "fools" humans, which has to some extent misled the direction of AI research and makes for an unscientific judgment of "intelligence".

Seventy years on, the degree of "intelligence" is no longer the only dimension of machine capability that concerns us. As we begin, under potential threat, to scrutinize whether AI has self-awareness, academia and industry urgently await a "Turing test" for the new era, one that can illuminate the next seventy years of AI.

No one can pull the plug on AI in an emergency

In an interview, the host asked Hinton a simple but profound question: if we find AI frightening, can't we solve the problem by just unplugging it?

In theory this sounds feasible. But Hinton believes that the more advanced and intelligent AI becomes, the harder it will be for people to give it up, and thus the harder it will be to control.

Hinton believes that although halting further development of the technology sounds effective, the idea is too naive. "Even if the United States stops development, other countries will continue and use AI for weapons." As for petitions calling for a pause on AI development, Hinton said that too is naive, because it simply will not happen.

Similar technology races have already played out. Google introduced the Transformer architecture in 2017 and later developed diffusion-based generative models, but it did not open these technologies to everyone at first, remaining cautious out of concern about possible negative consequences. Once ChatGPT and other language models were released one after another, however, Google could only keep up, for the sake of competition and profit.

Figure | Geoffrey Hinton being interviewed

Hinton hopes to promote agreements between the United States and other countries, much as nations have reached agreements on nuclear weapons. "If we let these technologies take over us, it will be bad for all of us. We are all facing a threat to our survival, so we should all cooperate." In his view, controlling AI has become a social science problem.

Ilya Sutskever has likewise stressed on many occasions that AI may become conscious, and the threats that would follow.

As a public face of OpenAI, his words and deeds contrast sharply with those of CEO Sam Altman. Especially before the OpenAI boardroom upheaval, Sam Altman was constantly flying around the world to shake hands with executives and politicians, while Sutskever kept a low profile: apart from occasional interviews and campus talks, he spent most of his time working in an inconspicuous office building and then going home to rest.

This also reflects the deep differences between the two over AI's future. Sutskever said in an interview that his focus is no longer on building the next ChatGPT but on preventing super AI from going rogue and out of control.

In Hinton, Sutskever and other scientists making similar efforts, the outside world sees the shadow of an Oppenheimer trying to head off the final explosion of the AI era. Together with the tests described above, at a moment when academic and industry consensus has yet to form and AI capabilities are growing explosively, they are taking the lead in building an evaluation system and a piece of infrastructure for the field. These efforts mark the beginning of a kind of order: at minimum, more people developing AI will follow certain norms, the safety lines of AI's future will be drawn rigorously, and clear standards and understanding can eventually be established.

Reference Links

[1] https://arxiv.org/abs/2308.08708

[2] https://www.science.org/content/article/if-ai-becomes-conscious-how-will-we-know

[3] https://www.technologyreview.com/2023/10/17/1081818/why-itll-be-hard-to-tell-if-ai-ever-becomes-conscious/

[4] https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/

[5] https://www.americanbrainfoundation.org/how-will-we-know-if-ai-becomes-conscious/

[6] https://mp.weixin.qq.com/s/tazJu1C0Vy0c5xn91IRZIQ

This article is supported by the Science Popularization China Starry Sky Project

Produced by: China Association for Science and Technology Department of Science Popularization

Producer: China Science and Technology Press Co., Ltd., Beijing Zhongke Xinghe Culture Media Co., Ltd.
