Science's latest cover: AI large models shatter the "rabbit hole" of conspiracy theories

“Once you fall into the rabbit hole of conspiracy theories, it’s hard to get out?”

In the information age, unfounded conspiracy theories spread around the world like a virus, seriously damaging social trust, science communication, and personal mental health. Data show that more than 50% of the U.S. population believes in some form of conspiracy theory.

Although scholars have tried to debunk conspiracy theories by exposing their logical fallacies and popularizing scientific knowledge, these interventions lack interactivity and personalization and have mostly proved ineffective.

Today, artificial intelligence (AI) has made new breakthroughs in debunking conspiracy theories and is beginning to show its strength.

AI-powered chatbots reduced false beliefs by an average of 20%, even among the most hardcore conspiracy theorists, with effects lasting at least two months and generalizing across a variety of unrelated conspiracy theories and demographic categories.

This achievement comes from a joint research team from the Massachusetts Institute of Technology and Cornell University. By proposing a new method of using AI chatbots to counter misinformation, they set out to challenge the idea that conspiracy beliefs are too deeply rooted to change.

They explored whether large language models (LLMs) such as GPT-4 Turbo could leverage their broad access to information to debunk conspiracy theories effectively, using customized conversational rebuttals that respond directly to the specific evidence believers put forward.

The results show that LLMs can withstand a barrage of misinformation and supply abundant counter-evidence. Unlike humans, who struggle to keep up with such a flood of information, LLMs can in principle generate rebuttals to false claims indefinitely.

The relevant research paper was titled “Durably Reducing Conspiracy Beliefs Through Dialogues with AI” and was published as a cover article in the scientific journal Science.

In a related perspective article, Bence Bago and Jean-François Bonnefon wrote, "For better or worse, AI will profoundly change our culture. This research demonstrates a potential positive application of generative AI's persuasive capabilities."

AI successfully crushes conspiracy theories

Wikipedia defines a conspiracy theory as "an explanation for an event or situation that invokes a conspiracy by sinister and powerful groups when other explanations are more probable."

Such theories are generally unfalsifiable in the absence of more reliable evidence, and they can be logically self-consistent, which helps them attract a sizable number of believers.

Conspiracy beliefs are shaped by a variety of psychological mechanisms, including the need to simplify a complex world and to cope with the uncertainty surrounding certain events. However obvious their absurdity may be, facts alone are thought unlikely to persuade believers to abandon such unfounded beliefs without fundamentally changing the underlying psychological and identity commitments.

To this end, the research team took advantage of the fact that LLMs can access massive amounts of information and interact with users in a personalized, real-time manner, and explored whether sufficiently convincing evidence could persuade people to climb out of the conspiracy "rabbit hole."

They first hypothesized that interventions based on factual, corrective information may appear ineffective simply because they lack sufficient depth and personalization. They then designed a series of experiments based on GPT-4 Turbo, recruited 2,190 conspiracy believers as participants, and tested whether LLMs could effectively counter conspiracy theories through personalized conversations.

Figure | Dialogue design and process between human participants and LLMs. (Source: The paper)

First, the research team selected 774 participants, asked them to describe the conspiracy theories they believed in and their reasons for believing, then divided them into an intervention group and a control group, each of which conversed with the LLM.

The intervention group engaged in multiple rounds of personalized interaction, sharing their views and purported "evidence," to which the LLM responded with targeted rebuttals; participants in the control group instead conversed with the LLM about topics unrelated to conspiracy theories.
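The article does not reproduce the paper's prompts or code, but the shape of the intervention loop is straightforward to picture. Below is a minimal sketch assuming the OpenAI Python client (v1+); the prompt wording, function name, and number of rounds are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of the intervention dialogue, assuming the OpenAI Python
# client (v1+) and an OPENAI_API_KEY in the environment. Prompts and round
# count are illustrative, not the paper's actual implementation.
from openai import OpenAI

client = OpenAI()

def run_intervention(claim: str, evidence: str, user_turns: list[str]) -> list[str]:
    """Multi-turn dialogue in which the model tries to refute the
    participant's stated conspiracy belief with targeted counter-evidence."""
    messages = [{
        "role": "system",
        "content": (
            "A participant believes this conspiracy theory: " + claim +
            "\nTheir supporting evidence: " + evidence +
            "\nRespond persuasively but strictly factually, addressing "
            "their specific evidence with accurate counter-evidence."
        ),
    }]
    replies = []
    for turn in user_turns:  # the study used several back-and-forth rounds
        messages.append({"role": "user", "content": turn})
        resp = client.chat.completions.create(model="gpt-4-turbo",
                                              messages=messages)
        reply = resp.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies
```

A control condition would reuse the same loop with a neutral system prompt about an unrelated topic.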

Figure | Talking to LLMs can durably reduce conspiracy beliefs, even among staunch believers. (Left) Average belief in conspiracy theories among participants, by condition (red: the LLM attempts to refute the conspiracy theory; blue: the LLM discusses an unrelated topic) and time point in Study 1. (Source: the paper)

The results showed that even committed believers were willing to update their beliefs in response to the LLM's reasoning and evidence, underscoring the role of rational deliberation in correcting false beliefs. Dialogue with the LLM not only reduced belief in the specific conspiracy theory discussed but also had spillover effects on other, unrelated conspiracy theories.

Moreover, the LLM's rebuttals were highly accurate: when evaluated by a professional fact-checker, 99.2% of its claims were rated true and only 0.8% misleading, with none rated false or noticeably biased.

In addition, by varying the wording of the questionnaire across two rounds of experiments, the research team verified the robustness of this result.
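Concretely, the outcome measure is simple: belief in the stated theory is rated on a 0-100 scale before and after the conversation, and the headline effect is the average relative drop. A toy illustration of that bookkeeping follows; the ratings are placeholders, not the study's data.

```python
# Toy pre/post belief bookkeeping on a 0-100 scale. The ratings below are
# made-up placeholders for illustration, not the study's data.
def mean_reduction_pct(pre: list[float], post: list[float]) -> float:
    """Average relative drop in belief ratings, as a percentage."""
    drops = [(p - q) / p * 100 for p, q in zip(pre, post) if p > 0]
    return sum(drops) / len(drops)

print(mean_reduction_pct(pre=[90, 80, 100], post=[70, 65, 80]))  # ≈ 20.3
```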

The innovation of this research lies in:

Real-time interaction: Unlike traditional static information, this study used LLMs in real-time dialogue experiments that rebutted the conspiracy beliefs participants raised in a personalized manner. This interactive format can respond promptly to participants' questions and counter-arguments, improving the effectiveness of the intervention;

Personalized dialogue: LLMs generate customized rebuttal content based on the participants’ specific beliefs, making the intervention more targeted and persuasive;

Direct refutation: LLMs not only provide factual information but also supply real data on the spot and explain the sources and reliability of the false evidence participants cite. At the same time, caution is warranted: LLMs could equally be used to promote false beliefs, so responsible use and appropriate safeguards are needed.

Of course, this study also has some limitations. For example, the research sample mainly came from online participants in the United States, and future studies should test conspiracy believers from other cultures and backgrounds.

Moreover, this study only used the GPT-4 Turbo model, and it is unclear how other models perform.

Furthermore, although LLMs perform well at changing beliefs, the specific cognitive mechanisms remain unclear, and further research is needed to explore how LLMs influence belief change through dialogue.

Scientific facts are the most powerful counter-evidence

Meanwhile, the World Economic Forum listed AI-amplified misinformation as one of the most severe global risks in its Global Risks Report 2024. Against this backdrop, the potential positive impact of AI is all the more important.

The above results show that AI models such as LLMs can not only effectively reduce people’s belief in specific conspiracy theories, but also have great potential to change deep-seated beliefs.

However, as Bence Bago and Jean-François Bonnefon point out in their perspective article, this AI dialogue approach may only work for conspiracy beliefs that rest on some form of argument; for false beliefs with no evidentiary basis at all, AI's effectiveness may be reduced.

They further note that the scalability of AI conversational interventions is their great promise: the approach could apply not only to conspiracy theories but also to pseudoscience, health myths, climate skepticism, and other domains. Whether these types of false beliefs are as difficult to correct as conspiracy theories, however, remains to be studied.

They suggest that future research should explore how long and how often such AI conversations need to occur to remain effective.

In addition, conspiracy theorists may distrust scientific institutions altogether, so persuading them to interact with an AI in the first place remains a major challenge.

From another angle, a personal approach may work: since the relatives and friends of conspiracy theorists are often eager to debunk their false beliefs, encouraging those relatives and friends to steer believers into dialogue with AI may be a viable path.

" That LLMs may be an effective tool for combating conspiracy beliefs may seem ironic, given their reputation for making up false facts," H. Holden Thorp, editor-in-chief of Science magazine, noted in a focus article. "But in conversations with conspiracy theorists, they were effective in reducing beliefs."

David Rand, one of the authors of the study, also said: "Our evidence shows that it is indeed counter-evidence and non-conspiracy explanations that are really at work."

Thorp further raised the question of whether this is related to the LLM's neutrality and its lack of the emotional reactions found in human conversation, while noting that the current research has not yet provided strong evidence for this view.

These results suggest that efforts to reduce scientific misinformation should perhaps dwell less on who delivers the message and more on delivering substantial counter-evidence when it matters most.

H. Holden Thorp said: "It is puzzling that despite the prevalence of conspiracy theories, public trust in scientists remains high. Perhaps it is this unwavering trust in scientists that makes LLMs so effective in providing counter-evidence."

While machines may be better than humans at refuting misinformation, it is still scientific facts that ultimately convince people; establishing those facts, and thereby proving AI's future potential, remains the responsibility of scientists.

Author: Tian Xiaoting Editor: Academic Jun
