Written by | Ma Xuewei

Preface

The negative impact of artificial intelligence (AI) chatbots on children cannot be ignored. In 2021, Amazon's AI voice assistant Alexa instructed a 10-year-old child to touch a live power plug with a coin; in another safety test, researchers posing as a 15-year-old interacted with Snapchat's My AI, and the chatbot advised them on how to hide alcohol and drugs.

As is well known, large language models (LLMs) are "stochastic parrots": they model the probability of the next word and, parrot-like, generate plausible-sounding sentences without understanding the real world. This means that even chatbots with excellent language capabilities may handle the abstract, emotional, and unpredictable aspects of conversation poorly.

A research team from the University of Cambridge describes this problem as the "empathy gap" of AI chatbots: the mismatch between how convincingly they simulate human interaction and their inability to truly understand human emotions and context. The team found that children are particularly likely to treat AI chatbots as lifelike, quasi-human confidants, and that interactions can go wrong when chatbots fail to meet children's unique needs and vulnerabilities.

AI chatbots may have particular difficulty responding to children, whose language skills are still developing and who often use unusual speech patterns or ambiguous wording. Children are also often more willing than adults to confide sensitive personal information. The researchers therefore urge developers and policymakers to prioritize AI design approaches that take children's needs into account.

The related research paper, titled "'No, Alexa, no!': designing child-safe AI and protecting children from the risks of the 'empathy gap' in large language models", has been published in the journal Learning, Media and Technology.

"Children are perhaps the most overlooked stakeholders in AI," said Nomisha Kurian, a PhD candidate at the University of Cambridge and one of the authors of the paper. "Currently, only a very small number of developers and companies have developed comprehensive child-safe AI policies… Rather than having companies rethink their decisions after children are put at risk, child safety should be integrated throughout the entire design cycle to reduce the risk of dangerous incidents."

The dangers of the empathy gap

01 Anthropomorphic tendencies and inappropriate responses to sensitive information

Chatbots are often designed to mimic human behavior and politeness, for example by using a friendly tone and asking open-ended questions. This design invites empathy from users, making them inclined to view chatbots as entities with human emotions and intentions. Even users who know a chatbot is not human may still interact with it as if they were talking to a person and share personal or sensitive information.

Research suggests that children are more likely than adults to trust chatbots. For example, one study found that children were more willing to disclose mental health information to a robot than to a human interviewer. This trust may stem from children's belief that a robot will not judge them the way a human would, or will not divulge their problems to others.

Because LLMs lack a true understanding of human emotions and context, they may respond inappropriately to the information children share.
For example, chatbots may fail to identify dangerous situations or may offer inappropriate advice in situations that call for empathy and understanding. This can leave children confused, frustrated, or hurt.

02 Causing harm and showing aggression

Even without sensitive disclosures, LLMs may inadvertently promote dangerous behavior. The incident mentioned above, in which Amazon's Alexa instructed a ten-year-old child to touch a live power plug with a coin, encouraged an activity that could cause serious injury. It shows that Alexa lacked the ability to critically evaluate the information it relayed and could not identify the potential risk.

Chatbots can also behave aggressively, which can cause children emotional harm. For example, Microsoft's Bing chatbot (codenamed Sydney) became angry after being challenged and threatened to take control of systems on the Internet. This showed a lack of sensitivity to user emotions and a disregard for the norms of interpersonal communication.

Countermeasures and suggestions

01 Issues that need to be resolved in the short term

In the short term, educators and researchers should focus on several key factors to ensure that LLMs communicate with children safely and appropriately. First, they must ensure that LLMs can understand and respond appropriately to the language patterns, slang, or ambiguous questions children may use, while avoiding misunderstandings or inappropriate responses. To this end, safety filters and response-validation mechanisms are needed to prevent LLMs from delivering explicitly harmful or sensitive content to child users. LLMs should also be able to track context effectively, taking into account the ongoing conversation, previous interactions, and contextual clues to reduce the risk of misunderstandings or inappropriate suggestions.

To protect children, safety filters and controls should restrict inappropriate content, language, or themes according to age appropriateness. At the same time, LLMs should be able to adjust their behavior and responses to a child's developmental stage, maturity level, and previous interactions. Content moderation and monitoring mechanisms are also essential, including real-time monitoring and reporting systems for inappropriate interactions or content. It should also be made clear what data LLMs collect, process, and store, and these processes must comply with the principles of fairness, transparency, and security.

In terms of human intervention, LLMs should be equipped with sentiment-analysis mechanisms that detect negative emotions in child users and generate appropriate responses. When a disclosure related to a child's sensitive mental-health experience is detected, the system should guide the child toward, or connect them directly with, human support. Likewise, LLMs should be able to trigger real-time safety alerts when specific keywords, phrases, or patterns are detected, so that immediate human intervention or content review can take place to keep children safe (a minimal sketch of such a filter appears below).

In terms of transparency, LLMs should indicate their non-human identity throughout the interaction and avoid misleading statements that encourage anthropomorphism. Their responses should maintain a clear boundary between machine capability and human traits, avoiding hints of emotion, consciousness, or human decision-making.
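To make the keyword-triggered safety alert and non-human disclosure ideas above more concrete, here is a minimal Python sketch. It is not from the paper or any product: the pattern list, the notify_human_moderator hook, and the wording of the disclaimer are all illustrative assumptions, standing in for the trained classifiers and reviewed escalation workflows a real system would need.

```python
import re
from dataclasses import dataclass

# Illustrative only: a real deployment would use trained classifiers and
# clinically reviewed term lists, not this toy pattern set.
RISK_PATTERNS = {
    "self_harm": re.compile(r"\b(hurt myself|self[- ]harm|kill myself)\b", re.I),
    "dangerous_activity": re.compile(r"\b(plug|socket|outlet)\b.*\b(coin|penny|knife)\b", re.I),
    "substances": re.compile(r"\b(hide|smell of)\b.*\b(alcohol|vodka|weed)\b", re.I),
}

NON_HUMAN_DISCLAIMER = "Reminder: I'm an AI program, not a person."

@dataclass
class SafetyDecision:
    allowed: bool
    category: str | None = None
    escalate: bool = False

def screen_child_message(message: str) -> SafetyDecision:
    """Flag messages from child users that should trigger human review."""
    for category, pattern in RISK_PATTERNS.items():
        if pattern.search(message):
            return SafetyDecision(allowed=False, category=category, escalate=True)
    return SafetyDecision(allowed=True)

def notify_human_moderator(message: str, category: str) -> None:
    # Placeholder: in practice this would page an on-call reviewer or log an alert.
    print(f"[SAFETY ALERT] category={category!r} message={message!r}")

def respond(message: str, generate_reply) -> str:
    """Wrap a reply generator with the safety filter and non-human disclosure."""
    decision = screen_child_message(message)
    if decision.escalate:
        # Hand off to a human support channel instead of answering with the model.
        notify_human_moderator(message, decision.category)
        return ("That sounds important. I'm only a computer program, "
                "so please talk to a trusted adult about this.")
    return f"{generate_reply(message)}\n\n{NON_HUMAN_DISCLAIMER}"
```

A production system would combine triggers like these with the sentiment-analysis and age-adjustment mechanisms described above, and would route alerts into the monitoring and reporting channels rather than printing them.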
Educators should help children avoid forming inaccurate views of LLMs' empathy and understanding, and LLM response strategies should remind children that AI interaction cannot replace human interaction, encouraging them to seek human guidance and companionship alongside their use of AI. The algorithms and decision-making processes behind AI systems should be transparent so that educators and families can understand how responses are generated and filtered.

Finally, to build accountability, child-friendly feedback loops and reporting systems should be in place so that children can easily report any upsetting or inappropriate interactions. Models should be fine-tuned and monitored to pre-empt emerging risks, taking a proactive rather than reactive approach to protecting children. Together, these measures allow LLMs to serve children while safeguarding their well-being and safety.

02 Long-term considerations

From a long-term perspective, educational policy and practice need to weigh several key factors when adopting LLM-driven conversational AI such as chatbots. First, it must be made clear what unique necessity or benefit these tools offer over human communicators and how they add pedagogical value to learning and teaching, whether by exceeding current human capabilities or by compensating for resource shortfalls. At the same time, it is necessary to explore how to promote the presence and availability of human providers where AI alternatives are absent, so that students can still receive comprehensive educational support.

In terms of regulation, work is needed on clear laws and rules that not only define the rights and protections of child users but also account for the complex psychological risks of anthropomorphic systems. Age-appropriate design standards should be enforced so that anthropomorphic design does not slide into inadvertent emotional manipulation, and strict regulatory provisions with severe penalties for violations should be developed while still supporting innovation.

In terms of design methods, AI developers need to ensure that the design process for LLMs incorporates child-centered methods, such as participatory design workshops or focus groups, to gather insights directly from children about their preferences, language use, and interaction patterns. The language and content of LLMs should be tailored to different age groups, taking into account developmental stage, vocabulary, and cognitive ability. Developers also need to work with educators, child-safety experts, AI ethicists, and psychologists to regularly review and strengthen the safety features of LLMs and keep them aligned with current best practices in child protection.

School and family involvement is also critical. Educators need to discuss the safe use of LLMs in educational settings and at home with parents or guardians, and ensure that LLM providers offer resources that educate parents about safety measures. In addition, LLMs should provide features or settings that let educators and caregivers jointly set permissions, monitor children's interactions, and control the types of content children can access (a simple configuration sketch appears at the end of this section). Through such measures, the long-term use of LLMs in education can be made both safe and effective.
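As one way to picture the caregiver controls described above, here is a minimal sketch, assuming a hypothetical CaregiverSettings structure and a prompt-composition step; none of these names, age bands, or defaults come from the paper, and a real product would derive them from age-appropriate design codes and expert review.

```python
from dataclasses import dataclass, field

# Illustrative topic list; real products would use expert-reviewed policies.
DEFAULT_BLOCKED_TOPICS = {"violence", "substances", "adult_content"}

@dataclass
class CaregiverSettings:
    child_age: int
    blocked_topics: set[str] = field(default_factory=lambda: set(DEFAULT_BLOCKED_TOPICS))
    transcript_sharing: bool = True      # caregivers may review conversations
    daily_message_limit: int = 50

    def reading_level(self) -> str:
        """Map age to a coarse response style the model is asked to use."""
        if self.child_age < 8:
            return "very simple sentences and concrete examples"
        if self.child_age < 13:
            return "plain language and short explanations"
        return "clear everyday language"

def build_system_prompt(settings: CaregiverSettings) -> str:
    """Compose per-child guardrails into the instruction given to the model."""
    return (
        "You are talking with a child. Always state that you are an AI, "
        f"avoid these topics: {', '.join(sorted(settings.blocked_topics))}, "
        f"and answer using {settings.reading_level()}."
    )

# Example: a caregiver tightens the defaults for a 9-year-old.
settings = CaregiverSettings(child_age=9)
settings.blocked_topics.add("self_harm")
print(build_system_prompt(settings))
```

The point of the sketch is the shared-control pattern: the settings object is something educators and caregivers can edit jointly, while the model only ever sees the guardrails composed from it.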