Scientists who rarely explore human consciousness have begun discussing "AI consciousness."

Source: AI Technology Review | Author: Antonio | Editor: Chen Caixian

There is no doubt that humans have consciousness. In a sense, this "consciousness" can even be regarded as part of what we mean by human intelligence. As research on artificial intelligence has deepened, "Can AI have consciousness?" has gradually become a question on scientists' minds, and "consciousness" is increasingly treated as one of the criteria for judging whether AI is intelligent.

In mid-February, for example, OpenAI's chief scientist Ilya Sutskever started a discussion about AI consciousness on Twitter, writing that today's large neural networks may already have some semblance of consciousness. His view immediately sparked debate among AI experts. Turing Award winner and Meta AI chief scientist Yann LeCun was the first to object, replying with a blunt "Nope." Judea Pearl sided with LeCun, arguing that existing deep neural networks are still unable to "deeply understand" certain domains. After several rounds of back-and-forth, Pearl added: "...In fact, we don't even have a formal definition of 'consciousness'. The only thing we can do is ask the philosophers who have studied consciousness throughout the ages..."

This is a question of origins. If we are to discuss "AI consciousness," we first have to ask: What is "consciousness"? What does it mean to have "consciousness"? To answer these questions, computer science alone is far from enough. The discussion of "consciousness" can in fact be traced back to the "Axial Age" of ancient Greece, and ever since, "consciousness," as the core of human epistemology, has been an unavoidable topic for philosophers.

After the discussion of AI consciousness arose, Amanda Askell, a former research scientist at OpenAI, offered some interesting insights on the topic.

Caption: Amanda Askell, whose research focuses on the intersection of AI and philosophy

In her latest blog post, "My mostly boring views about AI consciousness," Askell focuses on "phenomenal consciousness" in the phenomenological sense, rather than "access consciousness." Phenomenal consciousness emphasizes the subject's experience: feelings, raw experience, and passive attention. Access consciousness emphasizes the subject's active, deliberate attention. For example, when you do homework with relaxing music playing, you can feel the music in the background (phenomenal consciousness) without attending to its specific content, while the homework itself is what you actively attend to (access consciousness): you know exactly what you are doing.

Caption: At a glance you notice the large print in a book (phenomenal consciousness); becoming aware of the details in it is access consciousness.

This is a bit like the two attention mechanisms commonly discussed in computer vision and cognitive science: phenomenal consciousness corresponds to "bottom-up" attention, while access consciousness corresponds to "top-down" attention, as the sketch below illustrates.
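To make the analogy concrete, here is a minimal, hypothetical sketch (not from Askell's post) contrasting stimulus-driven "bottom-up" weighting with goal-driven "top-down" weighting over a toy feature map; the feature values and the query vector are invented purely for illustration.

```python
import numpy as np

# Hypothetical illustration: two ways of assigning attention weights
# to the same set of locations in a toy 1-D "scene".

rng = np.random.default_rng(0)
features = rng.normal(size=(10, 4))          # 10 locations, 4 feature channels

# Bottom-up ("phenomenal"-like): salience driven purely by the stimulus itself,
# here measured as how much each location deviates from the scene average.
salience = np.linalg.norm(features - features.mean(axis=0), axis=1)
bottom_up = np.exp(salience) / np.exp(salience).sum()   # softmax over locations

# Top-down ("access"-like): attention driven by the observer's current goal,
# expressed as a query vector the system is deliberately looking for (assumed).
query = np.array([1.0, 0.0, -1.0, 0.5])
scores = features @ query
top_down = np.exp(scores) / np.exp(scores).sum()

print("bottom-up weights:", np.round(bottom_up, 3))
print("top-down weights: ", np.round(top_down, 3))
```

The bottom-up weights depend only on the stimulus itself, while the top-down weights change whenever the query (the observer's goal) changes, roughly the analogue of passively registering the background music versus deliberately attending to the homework.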
Askell agrees that higher intelligence is more closely tied to access consciousness, which can also effectively distinguish humans from other animals, but she is "more interested in the difference between tigers and rocks than the difference between humans and tigers," and phenomenal consciousness is enough to draw that distinction. She also believes that if "phenomenal consciousness" appears, certain moral and ethical questions arise with it, which is why she thinks the study of consciousness matters.

1 Are current AI systems conscious?

Askell makes an interesting observation: today's AI systems are more likely to be phenomenally conscious than a chair, but far less likely than a mouse, and less likely even than an insect, a fish, or a bivalve. She roughly places AI systems in the same region as plants: plants behave in ways that seem to require planning and can do things that seem to require internal and external communication, and AI systems behave in somewhat similar ways. However, she is also convinced that AI systems as a whole have greater future potential for consciousness than plants or bivalves. In particular, future AI research with more biologically inspired neural networks may produce more architectures, behaviors, and cognitive systems related to consciousness.

Some studies have claimed that plants possess forms of consciousness and intelligence, that they can feel pain, and that they communicate and interact with their environment.

So when considering whether an AI is conscious, what kinds of evidence should be weighed? Askell lists four: architectural, behavioral, functional, and theoretical.

Architectural evidence refers to how closely a system's physical structure resembles that of a human; for example, the structure of a brain is far more conducive to consciousness than that of a finger.

Behavioral evidence is an entity performing actions associated with consciousness and cognition, such as being aware of its surroundings and responding to external stimuli, or more complex behaviors such as speech and reasoning.

Functional evidence considers an entity's goals and how those goals relate to its environment. For example, a table or chair faces no real evolutionary pressure from its environment, so it has no reason to develop the kind of awareness of its surroundings that a mouse has.

Theoretical evidence includes the coherence and persuasiveness of the theory of consciousness itself.

Philosophers of mind today lean toward two broad tendencies: one is the inclusive camp, such as panpsychists who hold that even atoms can have consciousness; the other is the mechanistic camp, who deny that non-human entities have consciousness. Whichever tendency one adopts, the consciousness of AI can be discussed in terms of the four kinds of evidence above.

2 Does it matter whether AI is conscious?

Most AI practitioners do not take consciousness into consideration; AI and consciousness seem to meet only in the imagined futures of science-fiction films. Yet in terms of safety, ethics, bias, and fairness, the combination of consciousness and AI is attracting more and more attention in academia and industry.

Askell believes that if AI has phenomenal consciousness, ethical issues are very likely to follow, and these have a great deal to do with its creators. In particular, when an AI makes mistakes or is "abused," its creators should bear some of the responsibility. Askell discusses two important concepts in moral philosophy: moral agents and moral patients.
Moral agents are entities that can distinguish good from evil and can bear the consequences of their actions, such as adults. Moral patients are entities that cannot distinguish good from evil and so cannot be held morally accountable and generally do not bear the consequences, such as animals or young infants.

Objects of moral concern

Askell believes that once an entity has perceptions such as pleasure and pain (i.e., is sentient), it is likely to become an object of moral concern. If an object of moral concern (such as a cat) is found to be suffering, it would be unreasonable for an ordinary person not to try to fulfill a moral obligation to alleviate its suffering. She also believes that phenomenal consciousness is a necessary condition for sentience, and therefore, by extension, phenomenal consciousness is a prerequisite for being an object of moral concern.

A possible point of dispute is whether certain groups have moral status, or a higher moral status than others. Moral status is a concept from ethics and refers to whether a group's wrongs can be discussed in moral terms; most living things have moral status, while inanimate objects do not. Overemphasizing that a particular group has this status seems to imply that this group matters more and other groups matter less, which is just as worrying as the arguments over giving more moral status to animals, insects, fetuses, the environment, and so on. Askell points out that helping one group need not come at the expense of others. For example, a vegetarian diet is good for both animals and human health. "Groups don't usually compete for the same resources, and we can often use different resources to help both groups rather than forcing a trade-off between them. If we want to increase resources for global poverty alleviation, taking existing donations away from philanthropy isn't the only option; we can also encourage more people to donate and give."

So when future sentient AI systems become objects of moral concern, it does not mean that we no longer care about the well-being of other humans, nor that we must divert existing resources to help them.

Moral agents

Moral agents know what is good and what is evil, so they tend to act well and avoid acting badly, and when they do something morality or law forbids, they are punished accordingly. The weakest sense of moral agency is simply being responsive to positive and negative incentives: another entity can punish the agent for bad behavior or reward it for good behavior, and this will improve the agent's future behavior. It is worth noting, Askell points out, that receiving stimuli and responding to feedback does not seem to require phenomenal consciousness. Current machine learning systems already fit this description in some sense, for example models trained to reduce a loss function, or the more explicit "rewards" and "punishments" of reinforcement learning, as the sketch below illustrates.

Figure 1: Reward feedback mechanism for reinforcement learning
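As an illustration of this weakest sense of agency, here is a minimal, hypothetical sketch (not code from the article) of an agent whose behavior shifts purely because of numerical rewards and punishments; the environment, its payoffs, and all parameter values are invented for the example.

```python
import numpy as np

# Hypothetical sketch of a "weak moral agent": an entity whose future behavior
# changes only because of rewards and punishments, with no understanding of
# right or wrong involved.

rng = np.random.default_rng(1)
n_actions = 3
value_estimates = np.zeros(n_actions)   # the agent's learned preferences
counts = np.zeros(n_actions)

def feedback(action: int) -> float:
    """Assumed environment: action 2 is rewarded (+1), action 0 is punished (-1)."""
    return {0: -1.0, 1: 0.0, 2: 1.0}[action] + rng.normal(scale=0.1)

for step in range(500):
    # epsilon-greedy: mostly exploit current estimates, sometimes explore
    if rng.random() < 0.1:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(value_estimates))
    reward = feedback(action)            # the "reward" or "punishment" signal
    counts[action] += 1
    # incremental average: preferences drift toward whatever was rewarded
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

print("learned preferences:", np.round(value_estimates, 2))
print("times each action was chosen:", counts.astype(int))
```

Nothing in this loop requires the agent to understand that one action is "good" and another "bad"; its preferences simply drift toward whatever was rewarded, which is exactly why this weak kind of agency says little about moral responsibility.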
What about moral agency in a stronger sense? We usually think an agent can be morally responsible for its actions only if it has the capacity to understand right and wrong and is not tricked into acting otherwise. For example, if a person persuades a friend to set a forest fire and the friend is caught, then no matter how much the friend protests that he was put up to it, the one who bears the moral responsibility is the person who set the fire, not the person who persuaded him. However, if a person trains his dog to set fires, we place most of the moral responsibility on the trainer rather than the pet.

Why do we hold the human arsonist morally responsible, but not the trained dog? First, the human arsonist has the capacity to weigh his options and to choose not to follow his friend's suggestion, whereas the dog lacks this ability to reason about its choices. Second, the dog never understands that its actions are wrong and never shows any disposition to do wrong; it simply does what it was trained to do.

Suppose an advanced machine learning system becomes a moral agent in this stronger sense: it is fully capable of understanding right and wrong, of weighing its viable options, and of acting on its own will. Does that mean that if the system does something wrong, those who created it should be absolved of moral responsibility? Askell disagrees. To think about the issue more carefully, she suggests asking the creators the following questions:

What are the expected effects of creating a particular entity (such as an AI)?
How much effort did the creators put into obtaining evidence about those effects?
To what extent can they control the behavior of the entities they create (either by directly influencing their behavior or by indirectly influencing their intentions)?
How much effort do they put into improving the entity's behavior, to the extent that they can?

Even if creators do everything they can to ensure that a machine learning system behaves well, it may still fail, and sometimes these failures are due to mistakes or negligence on the creators' part. Askell believes that creating moral agents certainly complicates things, because moral agents are harder to predict than automata (consider the judgments a self-driving car must make on the road), but this does not exempt creators from the obligation to take responsibility for the safety of the AI systems they create.

3 How important is the work of studying AI consciousness?

At present, very little research in AI focuses specifically on consciousness (or on other philosophical questions), but some scholars have already begun cross-disciplinary work on the topic. For example, after the advent of GPT-3, the philosophy blog Daily Nous opened a special section to discuss what the philosophy of language has to say about AI. At the same time, Askell stresses that the discussion of AI consciousness should not stop at abstract philosophical speculation; it should also work toward practical frameworks, such as building a set of efficient evaluations for machine consciousness and perception. There are already methods for detecting pain in animals, and they may offer some inspiration.

On the other hand, the more we understand about AI consciousness, the more we understand about ourselves. So although there is no consensus on AI consciousness, the discussion itself is progress. We look forward to more research on AI consciousness.

Reference link: https://askellio.substack.com/p/ai-consciousness?s=r

Source: Academic Headlines