Generative AI, like the technology that powers ChatGPT, has pushed AI algorithms and data processing from behind the scenes to the center of our digital experiences. Many children now use it every day, whether for homework help or for deciding what to wear. The rapid adoption of artificial intelligence has sparked public debate about its broader impact. While generative AI brings benefits, it also inherits long-standing challenges of AI technology, such as algorithmic bias and the black-box problem; it may amplify issues such as unpredictable outputs, and it may introduce entirely new ones. Children and teenagers are among the heaviest users of the internet, and given the speed at which AI is developing and spreading, it is important to understand its impact on them.

How many children have used generative AI?

Generative AI is a subfield of machine learning that learns from massive amounts of data, discovers patterns in it, and generates new, similar data. It is most often used to produce content that mimics human creation, whether text, images, or computer code, but it can also tackle complex planning tasks, such as supporting the development of new drugs or improving the performance of robots in unprecedented ways. We do not know exactly how many children are using generative AI, but preliminary surveys suggest adoption is higher among children than adults. A small US poll found that while only 30% of parents had used ChatGPT, 58% of their children aged 12-18 had used it, often keeping it secret from parents and teachers. In another US survey, young adults who were aware of ChatGPT reported using it more than older adults did. AI is already part of children's lives through recommendation algorithms and automated decision-making systems, and the industry's enthusiasm for generative AI suggests it may soon become a standard feature of children's digital environments.
Generative AI will be embedded in children's digital lives in a variety of ways, such as in digital personal assistants and search engine assistants. Platforms popular with children, such as Snapchat, have already integrated AI chatbots, while Meta plans to add AI assistants to its family of products, including Instagram and WhatsApp, which together have more than 3 billion daily active users.

How Generative AI Can Benefit Children

Generative AI offers potential opportunities such as homework help, accessible explanations of difficult concepts, and personalized learning experiences that adapt to a child's learning style and pace. Children can use AI to create artwork, compose music, and write stories and software (with little or no coding skill), nurturing their creativity. Children with disabilities can also interact with digital systems in new ways, co-creating through text, voice, or images. When children use these systems directly, generative AI can help detect health and developmental issues early; indirectly, generative AI systems can extract insights from medical data to support advances in healthcare. More broadly, the analytical and generative capabilities of generative AI can be applied across industries to improve efficiency and develop innovative solutions that benefit children.

The risks of generative AI for children

However, generative AI can also be deliberately misused by bad actors, or cause harm and societal disruption inadvertently, at the expense of children's development and well-being. Generative AI has been shown to create false text-based information on the fly that is indistinguishable from human-generated content and can even be more convincing than it. AI-generated images are likewise indistinguishable from real faces and, in some cases, are perceived as more trustworthy than real faces (see Figure 1).
These capabilities can help fraudsters scale their operations and cut their costs. Children, whose cognitive abilities are still developing, are particularly vulnerable to disinformation and misinformation.

Figure 1: A mixture of real and generated faces. AI-generated faces are indistinguishable from real ones (left column: real faces; right column: synthesized faces). Image credit: Sophie J. Nightingale and Hany Farid, 2021

Long-term use of generative AI also raises questions for children. For example, given that chatbots adopt human-like tones, what impact might interacting with AI systems have on children's development? Early research suggests that children's perceptions and their traits in terms of intelligence, cognitive development, and social behavior could be affected. In addition, many AI systems carry inherent biases; how might these biases shape children's worldview? Experts warn that more rigorous testing is needed to ensure chatbots are safe for children. And what does it mean for children's privacy and data protection when they share personal data in conversations and interactions with generative AI systems? The Australian eSafety Commissioner argues that, in this context, more consideration must be given to how children's data is collected, used, and stored, especially for commercial purposes.

Long-term effects

The possible outcomes of generative AI present both opportunities and risks. For example, we do not yet know how generative AI will reshape children's future working lives; it may replace some key jobs while creating new ones, and how and what we teach children today will be closely tied to that shift. Although the opportunities and risks of AI extend far beyond these examples, they already reveal its wide-ranging impact.
Because children will be dealing with AI systems throughout their lives, their interactions with AI during their formative years may have lasting effects, which requires policymakers, regulators, AI developers, and other stakeholders to be more forward-looking.

The need for action

As a starting point, existing AI resources offer plenty of direction for responsible AI development. For example, UNICEF's AI for Children: A Policy Guide sets out nine requirements for upholding children's rights in AI policy and practice, while the World Economic Forum's AI for Children toolkit provides advice for tech companies and parents. However, generative AI is developing at a rapid pace; existing policies may need to be reinterpreted in the context of new technologies, and new guidance and regulation may be required. Policymakers, technology companies, and others working to protect children and future generations need to act urgently. They should support research into the impacts of generative AI and engage children in foresight efforts to better anticipate governance needs. There must be greater transparency around AI, responsible technology development by AI vendors, and advocacy for children's rights. Global regulation of AI requires the full support of governments, as UN Secretary-General António Guterres has called for.

Source: World Economic Forum