Can machines think? Can artificial intelligence be as smart as humans? A new study suggests that, in at least one narrow sense, it can. In a non-verbal Turing test, a research team led by Professor Agnieszka Wykowska of the Italian Institute of Technology found that behavioral variability can blur the distinction between humans and machines, that is, it can help robots appear more human. Specifically, their AI program passed a non-verbal Turing test by simulating the variability of human reaction times while playing a shape- and color-matching game with human teammates.

The research paper, titled "Human-like behavioral variability blurs the distinction between a human and a machine in a nonverbal Turing test", was published in the journal Science Robotics. The team says the work can guide future robot design by giving robots human-like behaviors that humans perceive as such.

Commenting on the study, Tom Ziemke, professor of cognitive systems at Linköping University, and postdoctoral researcher Sam Thellman argue that the results "blur the distinction between humans and machines" and make a valuable contribution to the scientific understanding of human social cognition. However, "human similarity is not necessarily the ideal goal for the development of artificial intelligence and robotics. Making artificial intelligence less human-like may be a wiser approach."

Passing the Turing test

In 1950, Alan Turing, the "father of computer science and artificial intelligence," proposed a test to determine whether a machine is intelligent: the Turing test. Its key idea is that questions about the possibility of machine thinking and intelligence can be examined by testing whether a human can tell whether they are interacting with another person or with a machine.

Today, scientists use the Turing test to evaluate which behavioral characteristics an artificial agent must exhibit before humans can no longer distinguish a computer program from human behavior. AI pioneer Herbert Simon once said: "If programs exhibit behaviors similar to those exhibited by humans, we call them intelligent." Similarly, Elaine Rich defined artificial intelligence as "the study of how to make computers do things that people currently do better."

The non-verbal Turing test is a variant of the Turing test. It is not easy for an AI to pass, because machines are not as skilled as humans at detecting and distinguishing the subtle behavioral characteristics of others. So, could a humanoid robot pass a non-verbal Turing test by embodying human characteristics in its physical behavior?

In their non-verbal Turing test, the team examined whether they could program an AI to vary its reaction times within a range similar to human behavioral variation, so that it would be perceived as human. To do this, they placed a human participant and a robot in front of a screen displaying different colors and shapes.

Figure | Robots and humans perform tasks together. (Source: the paper)

When the shape or color changed, the participant pressed a button, and the robot responded to this signal by clicking the opposite color or shape displayed on the screen.
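The paper's controller code is not reproduced here, but the core idea, drawing each response latency from a distribution fitted to human reaction times rather than responding at a fixed machine-like speed, can be sketched in a few lines of Python. The ex-Gaussian model, the parameter values, and the function names below are illustrative assumptions, not the study's actual implementation:

```python
import random

# A minimal, hypothetical sketch of injecting human-like reaction-time
# variability. Human reaction times are often modeled with an ex-Gaussian
# distribution (a Gaussian convolved with an exponential), which captures
# both the typical latency and the long right tail of occasional slow
# responses. These parameter values are illustrative, not from the paper.
MU_S = 0.45      # Gaussian mean of the reaction time, in seconds
SIGMA_S = 0.06   # Gaussian standard deviation, in seconds
TAU_S = 0.15     # mean of the exponential tail, in seconds

def humanlike_reaction_time() -> float:
    """Draw one reaction time from the ex-Gaussian distribution."""
    return random.gauss(MU_S, SIGMA_S) + random.expovariate(1.0 / TAU_S)

def respond(press_opposite_stimulus) -> float:
    """Wait a human-like interval, then trigger the robot's response."""
    delay = max(0.1, humanlike_reaction_time())  # clamp implausibly fast draws
    # A real controller would schedule the motor command after `delay`;
    # here we simply invoke the supplied callback with the sampled value.
    press_opposite_stimulus(delay)
    return delay

if __name__ == "__main__":
    # Print a few sampled latencies to show the trial-to-trial variability.
    print([round(humanlike_reaction_time(), 3) for _ in range(5)])
```

A fixed-delay controller would be trivially machine-like; sampling each latency from a distribution shaped like human response data is what makes the robot's timing statistically resemble a human operator's.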
Figure | Responding by pressing a button. (Source: the paper)

During testing, the robot was sometimes controlled remotely by a human and sometimes by an AI trained to mimic human behavioral variability.

Figure | Participants were asked to judge whether the robot's behavior was pre-programmed or controlled by a human. (Source: the paper)

The results showed that participants could easily tell when the robot was being operated by another person. When the robot was operated by the AI, however, participants guessed wrong more than 50% of the time.

Figure | Average accuracy of the Turing test. (Source: the paper)

This means the AI passed the non-verbal Turing test. The researchers caution, however, that human-like behavioral variability may be only a necessary, not a sufficient, condition for embodied AI to pass a non-verbal Turing test, since such variability can also be exhibited in human environments.
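To make the pass criterion concrete: one standard way to formalize "participants could not reliably tell" is to test whether their classification accuracy exceeds the 50% chance level. The sketch below uses an exact one-sided binomial test with made-up counts; it illustrates the criterion and is not the paper's actual statistical analysis:

```python
from math import comb

def p_at_least(correct: int, total: int, p: float = 0.5) -> float:
    """Exact one-sided tail probability P(X >= correct) for X ~ Binomial(total, p)."""
    return sum(comb(total, k) * p**k * (1 - p)**(total - k)
               for k in range(correct, total + 1))

if __name__ == "__main__":
    correct, total = 45, 100  # hypothetical judgment counts, not the paper's data
    p_value = p_at_least(correct, total)
    # If accuracy is not significantly above chance, observers failed to
    # distinguish the AI-controlled robot from a human-controlled one.
    print(f"accuracy = {correct / total:.0%}, one-sided p = {p_value:.3f}")
    print("indistinguishable (passes)" if p_value > 0.05 else "distinguishable (fails)")
```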
Does artificial intelligence need to be like humans?

Human-likeness has long been a goal and a metric in AI research, and the Wykowska team's results show that behavioral variability can be used to make robots more human-like. Ziemke and Thellman, however, argue that making AI less human-like might be the wiser approach, citing self-driving cars and chatbots as examples.

Suppose you are about to cross a crosswalk and see a car approaching. From a distance you may not be able to tell whether it is an autonomous vehicle, so you can only judge from the car's behavior. But even if you see someone behind the wheel, you cannot be sure whether that person is actively controlling the vehicle or merely monitoring its operation.

"This has a very important impact on traffic safety. If an autonomous vehicle cannot indicate to others whether it is in autonomous mode, it may lead to unsafe human-machine interactions."

Some may argue that, ideally, you should not need to know whether a car is self-driving, because in the long run self-driving cars may drive better than humans. For now, however, people's trust in self-driving cars falls far short of that.

Chatbots, by contrast, are closer to the scenario of Turing's original test. Many companies use chatbots in online customer service, where conversation topics and interaction patterns are relatively limited, and in such settings chatbots are often more or less indistinguishable from humans. The question, then, is whether companies should tell customers that a chatbot is not human. Disclosure often triggers negative reactions from consumers, such as reduced trust.

As these cases illustrate, while human-like behavior may be an impressive engineering achievement, the indistinguishability of humans and machines raises obvious psychological, ethical, and legal issues. On the one hand, people who interact with such systems must know the nature of what they are interacting with in order to avoid deception; for chatbots, California enacted a chatbot disclosure law in 2018 that makes such disclosure a strict requirement. On the other hand, some cases are even harder to disambiguate than chatbots versus human customer service.

In autonomous driving, for example, interactions between autonomous vehicles and other road users have no clear start and end points, are usually not one-to-one, and operate under real-time constraints. The question is therefore when and how an autonomous vehicle's identity and capabilities should be communicated. Moreover, fully automated cars are likely still decades away, so mixed traffic with varying degrees of partial automation will probably be the norm for the foreseeable future. There has been considerable research into the external interfaces autonomous cars might need in order to communicate with people, but far less into how much of this complexity vulnerable road users, such as children and people with disabilities, are actually able and willing to deal with. The general rule above, that people interacting with such a system must be informed of the nature of what they are interacting with, may therefore be feasible only in clearer-cut circumstances.

The same ambivalence appears in discussions of social robotics research: given the human tendency to anthropomorphize robots and to attribute mental states and human-like properties to them, many researchers aim to make robots more human-like in appearance and behavior so that they can interact in more or less human-like ways. Others, however, argue that robots should be easily identifiable as machines, to avoid excessive anthropomorphic attributions and unrealistic expectations. "So a more sensible approach might be to use these findings to make robots less human-like."

In the early days of artificial intelligence, imitating humans may have been a common goal for the field, "but now that artificial intelligence has become part of people's daily lives, we at least need to think about in what direction it would be truly meaningful to strive for human-like artificial intelligence."

Reference links:
https://www.science.org/doi/10.1126/scirobotics.abo1241
https://www.science.org/doi/10.1126/scirobotics.add0641