How do you change someone’s mind about something false that they believe is true? Is it possible?
My friend and change psychologist, Owen Fitzpatrick, recently recommended the book How Minds Change by David McRaney. One of the book's central stories is about someone who changed his mind about 9/11 conspiracy theories, even after being a prominent YouTuber on the subject. The book is all about how people make up and change their minds and, yes, how other people can (or can’t!) contribute to that process.
While the book presents all sorts of science-backed strategies for changing people’s minds, the challenge at the center of my own research remains: How can we make that process of changing minds faster? Many of the strategies we know work for changing people’s minds—motivational interviewing, deep canvassing, street epistemology, transformative learning, etc.—depend on two-way conversations that, even at their fastest, still take at least 20 minutes to move the needle on someone’s beliefs. Many of those approaches, like those used in counseling and therapy, can take months.
What’s more, the way we most often try to change people’s minds (with mounds and mounds of data) usually doesn’t work either, not least because people find ways to interpret that data so it supports their existing beliefs instead.
That’s all pretty depressing when you remember that, whether you’re talking from a persuasive standpoint or a sales standpoint, you’re often talking to people who have deeply, deeply held beliefs.
All of which makes a study from last fall on how people reduced their conspiracy beliefs after talking with an AI chatbot particularly interesting. The study suggests that people can and will change their minds when presented with enough evidence that goes against what they previously believed. That is, they can and will if that evidence is personalized and tailored both to their own particular brand of conspiracy theory and to the evidence they’re using to support that particular belief.
Unfortunately, humans can’t possibly retain enough information (plus links!) to counter every objection and piece of evidence from every person they meet.
But AI can, and in this study, that’s what it did. The researchers found that, overall, using AI to tailor and personalize an anti-conspiracy intervention seemed to reduce a person’s conspiracy belief by 20 percent. What’s particularly exciting: the effect remained in place two months later, held across a wide range of conspiracy theories, and showed up even among participants with deeply entrenched beliefs.
What could all of this mean for you and your own message design efforts?
One (and somewhat surprisingly, at least to me): the go-to tool of using evidence to change people’s minds can actually work, but probably not for you, my fellow human. Only the large language models behind AI agents can hold and process the necessary information to counter all the different collections of data individual conspiracy theorists (or your stakeholders or customers) use to support their beliefs.
This leads me to the second possible takeaway here: You could use an AI to anticipate the most common or expected objections someone might raise to your message and to help supply the go-to data to counter them. That is, of course, after you’ve developed a solid belief-based case to present first. 😉
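If you want to experiment with that second takeaway yourself, here’s a minimal sketch of what it might look like in code, using the OpenAI Python library. To be clear, this is my own illustration, not the setup the researchers used: the model name, the prompt wording, and the example message are all placeholders I’ve invented.

```python
# Minimal sketch: ask an LLM to anticipate objections to a message and suggest
# the kind of evidence that could counter each one. Everything here (model name,
# prompt wording, example message) is illustrative, not the study's own method.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

my_message = "Our team should move the quarterly report to a self-serve dashboard."

prompt = (
    "You are helping me pressure-test a message before I deliver it.\n"
    f"Message: {my_message}\n\n"
    "List the three most likely objections a skeptical stakeholder would raise, "
    "and for each one, suggest the kind of evidence or data that would best address it."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable chat model would work
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

You’d still do the persuading yourself; the AI just stocks your back pocket with the objections (and counters) you’re most likely to need.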
What surprises you most from this study and its findings?
How will you take what you’ve learned and apply it to your work?
Comment and let me know!