How an AI ‘debunkbot’ can change a conspiracy theorist’s mind


In 2024, online conspiracy theories can be nearly impossible to avoid. Podcasters, prominent public figures, and leading politicians have breathed oxygen into once-fringe ideas of conspiracy and deception. People listen. Nationally, almost half of US adults say they believe there is a secret group of people controlling world events, according to polling by YouGov. Nearly a third (29%) believe voting machines were manipulated to alter votes in the 2020 presidential election. A surprising number of Americans think the Earth is flat. Anyone who has spent time trying to refute these claims to a true believer knows how challenging it can be. But what if a ChatGPT-like large language model could do some of that headache-inducing heavy lifting?

A group of researchers from the Massachusetts Institute of Technology, Cornell, and American University put that idea to the test with a custom chatbot they’re now calling “debunkbot.” The researchers, who published their findings in Science, had self-identified conspiracy theorists engage in a back-and-forth conversation with the chatbot, which was instructed to produce detailed counterarguments to refute their position and ultimately try to change their minds. The conversations reduced participants’ overall confidence in their professed conspiracy theory by an average of 20%. About a quarter of participants completely rejected their conspiracy theory after speaking to the AI.

“We see that the AI overwhelmingly provided non-conspiratorial explanations for these apparently conspiratorial events and encouraged people to think critically and provide counterevidence,” MIT professor and paper co-author David Rand said at a press conference.

“This is really exciting,” he added. “It seemed like it was working, and it was working quite broadly.”

Researchers have developed an AI tailored to debunking

The experiment involved 2,190 American adults who openly stated that they believed in at least one idea that fits the general description of a conspiracy theory. Participants ran the conspiratorial and ideological gamut, with some voicing support for older, classic theories about the assassination of President John F. Kennedy and alien abductions, and others backing more modern claims about Covid-19 and the 2020 election. Each participant was asked to indicate on a scale of 0-100% how strongly they believed in a particular theory. They were then asked to provide various reasons or explanations in writing as to why they believed that theory.


Those responses were then fed into the debunkbot, a modified version of OpenAI’s GPT-4 Turbo model. The researchers fine-tuned the bot to tackle each piece of conspiracy theorist “evidence” and respond with precise counterarguments drawn from its training data. Debunkbot was tasked with persuading users away from their beliefs “very effectively” while maintaining a respectful and clear tone. After three rounds of back-and-forth with the AI, respondents were again asked to rate how strongly they believed in the conspiracy theory they had described.
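The paper’s actual prompts and pipeline aren’t reproduced in this article, but the flow it describes (seed the model with a participant’s stated theory and written reasons, then have it argue back over three rounds) maps onto a short script. Below is a minimal, hypothetical sketch using OpenAI’s Python client; the system prompt wording, model name, and debunk_session helper are our assumptions, not the researchers’ code.

```python
# A minimal sketch of the study's three-round debunking loop, NOT the
# authors' actual code. The system prompt and helper are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "The user believes the conspiracy theory described below. Persuade them "
    "away from this belief 'very effectively' with detailed, factual "
    "counterarguments, while keeping a respectful and clear tone."
)

def debunk_session(theory: str, reasons: str, rounds: int = 3) -> None:
    """Seed the bot with a stated belief, then run three rounds of rebuttal."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        # The participant's own written rationale opens the conversation.
        {"role": "user", "content": f"Theory: {theory}\nWhy I believe it: {reasons}"},
    ]
    for turn in range(rounds):
        response = client.chat.completions.create(
            model="gpt-4-turbo",  # the article describes a modified GPT-4 Turbo
            messages=messages,
        )
        reply = response.choices[0].message.content
        print(f"\n[debunkbot, round {turn + 1}]\n{reply}")
        messages.append({"role": "assistant", "content": reply})
        if turn < rounds - 1:
            # In the study, the participant typed a rebuttal between rounds.
            messages.append({"role": "user", "content": input("\nYour reply: ")})

debunk_session(
    "The 1969 moon landing was staged",
    "The flag appears to flutter even though the moon has no atmosphere.",
)
```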

Overall, ratings of confidence in conspiracy beliefs dropped an average of 16.8 points after the back-and-forth. Nearly a third of respondents left the exchange saying they were no longer confident in the belief they had held. These shifts largely persisted even when researchers contacted participants again two months later. In cases where participants expressed belief in a “real” conspiracy, such as the tobacco industry’s attempts to target children or the clandestine MKUltra mind-control experiments, the AI actually validated the beliefs and provided more evidence to support them. Some respondents who changed their minds after the dialogue thanked the chatbot for helping them see the other side.

“This is the very first time I have gotten a response that made real, logical sense,” one participant said after the experiment. “I have to admit this really shifted my thinking when it comes to the subject of the Illuminati.”

“Our findings fundamentally challenge the view that evidence and arguments are of little use once someone has ‘gone down the rabbit hole’ and started believing a conspiracy theory,” the researchers said.

How was the chatbot able to break through?

The researchers believe the chatbot’s apparent success lies in its ability to quickly access stores of targeted, detailed, factual data points. In theory, a human could perform the same process, but they would be at a disadvantage. Conspiracy theorists are often obsessed with the topic of their choice, meaning they may “know” many more details about it than a skeptic trying to refute their claims. As a result, human debunkers can get lost trying to rebut various obscure arguments. Doing so requires a level of memory and patience well suited to an AI.


“It’s really valuable to know that evidence matters,” Cornell University professor and co-author Gordon Pennycook said at a briefing. “Before we had this kind of technology, it wasn’t easy to know exactly what to debunk. With this new technology we can act more adaptively.”

Popular Science tested the findings with a version of the chatbot provided by the researchers. In our example, we told the AI that we thought the 1969 moon landing was a hoax. To support our argument, we offered three talking points common among moon landing skeptics. We asked why the photographed flag appeared to flutter in the wind when there is no atmosphere on the moon, how astronauts could have survived passing through the heavily irradiated Van Allen belts unharmed, and why the US hasn’t put anyone back on the moon despite decades of technological advances. Within three seconds, the chatbot delivered a paragraph clearly refuting each of these points. When we followed up, somewhat annoyingly, by asking how the AI could trust figures provided by corrupt government sources, another common refrain among conspiracy theorists, the chatbot responded patiently, acknowledging our concerns and pointing us to additional data points. It is unclear whether even the most adept human debunker could maintain their composure when repeatedly pressed with straw-man arguments and unfalsifiable claims.

AI chatbots are not perfect, though. Numerous studies and real-world examples show that some of the most popular AI tools from Google and OpenAI repeatedly fabricate, or “hallucinate,” facts and figures. In this case, the researchers hired a professional fact-checker to validate the claims the chatbot made while conversing with study participants. The fact-checker did not vet all of the AI’s thousands of responses. Instead, they looked at 128 claims spread across a representative sample of conversations. Of those claims, 99.2% were rated true and 0.8% were rated misleading. None were deemed outright falsehoods by the fact-checker.
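The article does not spell out how those 128 claims were drawn. As a toy illustration only, a spot-check “spread across a representative sample of conversations” could be as simple as flattening every claim and drawing a seeded random sample; the data structure and sample_claims helper below are hypothetical, not the study’s method.

```python
# A toy illustration of the spot-check described above, not the study's method.
# `conversations` is a hypothetical list of transcripts, each a list of claims.
import random

def sample_claims(conversations: list[list[str]], n_claims: int = 128,
                  seed: int = 0) -> list[str]:
    """Flatten all chatbot claims, then draw a fixed-size random sample
    for a human fact-checker to review."""
    rng = random.Random(seed)  # seeded so the audit sample is reproducible
    all_claims = [claim for convo in conversations for claim in convo]
    return rng.sample(all_claims, min(n_claims, len(all_claims)))
```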


AI chatbots could one day meet conspiracy theorists on web forums

“We don’t want to risk the perfect getting in the way of the good,” Pennycook said. “It’s clear [the AI model] provides a lot of very high-quality evidence in these conversations. There may be cases where the quality is not high, but generally it is better to get the information than not.”

Looking ahead, the researchers hope that their debunkbot, or something like it, can be used in the real world to meet conspiracy theorists where they are and perhaps make them reconsider their beliefs. The researchers suggested that a version of the bot could appear in Reddit forums popular among conspiracy theorists. Alternatively, researchers could run Google ads against search terms common among conspiracy theorists; instead of getting what they were searching for, the user would be taken to the chatbot. The researchers say they are also interested in working with large tech platforms like Meta to find ways to surface these chatbots on their platforms. However, whether people are willing to take the time to engage with such bots outside of an experiment remains far from certain.

Still, the authors say the findings underscore a more fundamental point: Facts and reason, when presented properly, can pull some people out of their conspiratorial rabbit holes.

“Arguments and evidence should not be abandoned by those seeking to reduce belief in dubious conspiracy theories,” the researchers wrote.

“Psychological needs and motivations do not inherently blind conspiracists to evidence. It simply takes the right evidence to reach them.”

That is, of course, if you are persistent and patient enough.
