On Sunday’s episode of The Excerpt podcast: This month, the Department of Justice shut down nearly 1,000 pro-Russia social media bots masquerading as American citizens. Their goal was to promote stories that showed Russia and President Vladimir Putin in a favorable light while sowing discord here in the U.S. With the proliferation of fake news on social media, is it possible to vaccinate ourselves against untruths and lies? What is the psychology behind persuasion and influence that makes people fall prey to fake news? Sander van der Linden, professor of social psychology at the University of Cambridge, joins The Excerpt to discuss what it takes to resist persuasion on social media.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.
Dana Taylor:
Hello and welcome to The Excerpt. I'm Dana Taylor. Today is Sunday, July 14th, 2024.
This month the Department of Justice shut down nearly 1,000 pro-Russia social media bots masquerading as American citizens. Their goal was to promote stories that showed Russia and President Vladimir Putin in a favorable light while sowing discord here in the US. Social media during the 2016 and 2020 elections was rife with fake news created by Russia, China, and Iran, big international players with a strong motivation to sway American perceptions and influence voters.
With a proliferation of fake news on social media, is it possible to vaccinate ourselves against untruths and lies? What is the psychology behind persuasion and influence that makes people fall prey to fake news? Our guest is Sander van der Linden, the author of Foolproof: Why Misinformation Infects Our Minds and How to Build Immunity. He's also professor of social psychology at the University of Cambridge. Thanks for being on The Excerpt, Sander.
Sander van der Linden:
Pleasure to be on the show. Thanks for inviting me.
Dana Taylor:
I mentioned the Russian social media bots that were recently taken down. Can you give a sense of the scale of what we're up against in the battle against bots and the spread of fake news?
Sander van der Linden:
Bots certainly play a big part in sowing doubt and confusion. They aren't necessarily very sophisticated in what they can do, but what they're good at is trolling people, a technique that aims to get people riled up emotionally, to fuel debates, and to politicize them. And so they're programmed to engage with people in rather nasty ways.
Bots can be detected, and they have different levels of sophistication, but some trolls are humans. There are farms out there of humans creating fake accounts and trying to do the same thing, and humans are much better at evading detection because they come across as more organic. Now, of course, with the assistance of generative AI, this whole process has become automated.
So whereas before, Russia was deploying farms where humans were working from 9:00 to 5:00. They would go into the office, make fake accounts, and think, "What can we divide Americans on today? Let's post some comments under a YouTube video." That's labor-intensive. Now they can do these things automatically using large language models. They can produce hundreds of thousands of stories, accounts, and posts at a scale they couldn't manage before. And that's powering a lot of the disinformation we're currently seeing.
Dana Taylor:
What is fluency and why is it important in discerning what is real or true and what is not?
Sander van der Linden:
Fluency is a really key concept. We know from psychological research that people are more likely to think a claim is true if it's been repeated, regardless of its veracity. This occurs because the brain mistakes familiarity for truth: the more often you repeat something, the more familiar it becomes. We can actually measure this in terms of the speed with which people process claims. We literally process things we've heard before faster than claims we haven't heard before. That processing ease is called fluency, and the brain takes fluency as a signal for truth.
And that's why lying repeatedly works as a strategy because the hundredth time that people hear it, it doesn't sound so crazy anymore. It's starting to sound more convincing because you've heard it before. It starts to seep into our memories. It starts to feel true. And we know from related research that over time, people start to think it's less immoral to share fake news the more they hear it.
Dana Taylor:
Other than fluency, what makes individuals so vulnerable to falling victim to fake news?
Sander van der Linden:
Part of the problem is that we're bombarded by information, and we have limited cognitive resources to make decisions when we're online, where we're seeing lots of different types of information. We know from research that if you stress people out with too many tasks, their brains resort to heuristics, or rules of thumb, to make decisions. And these aren't always in the service of accuracy. Do I like the person who's saying this? Do I trust a particular source? Does it resonate with what I already believe or want to be true? Those heuristics then take over in leading people to accept a claim as true or false, and that can happen to anyone.
And then there are other theories suggesting that people have deep-seated social, political, spiritual, or religious motivations to reject information that doesn't comport with what they want to be true about the world, particularly information that is divisive, that bolsters narratives favorable to one group while derogating another. That plays into why people sometimes spread misinformation even when they don't personally believe it: if it says something bad about the other side, sharing it is a good way to do what we call fostering group identity.
And then there are individual differences. We know that people who spend more time online, who get most of their news from social media, who are politically extreme, who are extremely low on trust, who are slightly paranoid, and who are what we call cognitively inflexible, meaning they want simple explanations and aren't so comfortable with uncertainty or with science, are more likely to endorse misinformation.
Dana Taylor:
Are there specific ways in which lies are crafted to make them easier to spread and easier to believe? For example, are spoken words more powerful than still images?
Sander van der Linden:
There are in fact what we call fingerprints of misinformation. I've talked to people who worked in Russian troll farms, who try to dupe people professionally, and it turns out they use a particular playbook. We call it the six degrees of manipulation. There are, of course, more than six techniques, but these are the most prevalent.
Polarizing people, creating an us-versus-them mentality; using conspiratorial reasoning; appealing to emotions rather than facts; trolling, which we already talked about, baiting people into an inflammatory response to sow doubt and conflict; impersonating experts or credible sources; and discrediting, using denial and deflection to cast doubt on official statistics, science, and mainstream narratives by alluding to alternative facts.
So that's discrediting, polarization, appeals to emotion, conspiratorial reasoning, trolling, and impersonation. If you're thinking about how to spot misinformation better, often what we do is attune people to these strategies. Is it polarizing? Is it emotionally manipulative? Is this a fake expert or an impersonation? Is it trying to discredit a particular story, not by using science or facts, but by casting doubt and using denial strategies? Or is this an example of trolling rather than genuine content and disagreement? Those questions can help us spot misinformation.
Dana Taylor:
You worked with the Cabinet Office in the UK, the World Health Organization, and the UN to test the theory that we can be vaccinated against falling for fake news. What did you discover regarding the COVID vaccine and the spread of misinformation?
Sander van der Linden:
So during the pandemic, a huge amount of misinformation spread online about the vaccine and about public health measures. There was, of course, a lot of legitimate discussion as well about areas where the science was uncertain and evolving, but we focused on the manipulation techniques: attempts to manipulate people emotionally about the dangers of vaccination, or conspiracy theories that you were going to be microchipped, and things like that.
And we worked with the Cabinet Office and the World Health Organization at the time to develop what we referred to as a psychological vaccine, in tandem with the biological vaccine. The idea was that, at the start of the pandemic, we wanted to preemptively expose people to weakened doses of the manipulation techniques they might face in the future, and refute and deconstruct them for people in advance, so that people could build up cognitive or mental antibodies and become more resistant over time.
Psychological inoculation works the same way as medical inoculation: you expose people to a severely weakened or inactivated strain of the misinformation virus, in this case, and then you show them why it's fallacious or manipulative. Then you give people a few more examples so they can practice on their own, and when they encounter the real thing in the future, they're more immune. And that's what we found.
Dana Taylor:
The public's desire to consume mass media is evident. Is there evidence of a clear desire to be able to discern between what is real and what isn't?
Sander van der Linden:
I think many citizens do have an interest in trying to become better at discerning what's manipulative and what's non-manipulative, or what's fact and what's fiction. When we've produced, for example, these educational games, we've had millions of people volunteer to go through these interventions. People are keen to find out more about how to avoid being misled online.
Of course, there's this effect where most people actually think that other people are more vulnerable to misinformation. And so what we do as part of our interventions is actually show people in a simulation that they could be targeted as well, and that they might be vulnerable because we give people quizzes and we test them and we reveal how much or how little they know.
As an expert, I'm vulnerable to being targeted with disinformation. I mean, nobody's immune. And that's what we're trying to show. And I think that intrigues people and they want to learn more about it. Some of the videos that we've done with Google and YouTube, hundreds of millions of people have engaged with them. So I do think that signals that there's an appetite.
And of course you sometimes get the criticism, "Yeah, okay, but the people who are deep into the lies might not be interested." That's true, and reaching out to and engaging with those communities is really difficult. We have to figure out ways to engage the audiences that are least likely to want a psychological vaccine or to be interested in learning more.
But yeah, we've designed a test called the Misinformation Susceptibility Test, which is free and benchmarked to the US population. Anyone can take it online and find out how good they are at spotting misinformation, or whether their competencies could use a boost.
Dana Taylor:
I'm going to take that test. Connections made through sharing information with other people on social media, and being privy to smart takes, are powerful. Why do you think that is?
Sander van der Linden:
I think what happens is that most people go online wanting to have normal conversations, but they're pushed content that is aggravating. That leads them to figure out, "Hey, what gets traction on my feed? What should I be doing? If I quote tweet this, if I reply in that way, I seem to be getting traction. If I say something nice, constructive, educational, nobody likes it, nobody shares it. But if I start yelling at this person, being inflammatory, all of a sudden I'm getting traction and gaining followers."
And so that's being incentivized, and it leads to a whole community of people being in conflict, following incentives that aren't necessarily desirable. In fact, people have been surveyed about this, and they say, "We don't want that. We don't want that type of content." So I think it's about changing the incentives and the environment in which we find ourselves online.
Dana Taylor:
What are some of the indirect consequences of the spread of fake news?
Sander van der Linden:
It lowers trust. It makes people trust institutions less, the media less. They have less trust in the electoral process, less trust in each other. It sows doubt, paranoia, and it can actually lead people to just become cynical and disengaged, and then we lose political participation. We lose some of the fundamental aspects of democracy, which requires an informed and engaged citizenry.
Some of these more indirect consequences are particularly pernicious: they trickle down and aren't a direct result of any single piece of misinformation. Maybe because you've been exposed, you're now less likely to want to vote or to care about the issue more generally, or maybe you start thinking that everything is false. We can't always identify misinformation as the direct causal agent, but it adds up at a societal level.
Dana Taylor:
Do you have hope that fostering an ability to discern real news from fake news will eventually have as powerful a pull across social media as fake news?
Sander van der Linden:
We know from psychology that people do have a desire to hold accurate perceptions about the world, but we're being derailed and distracted by other motivations, whether that's social motivations, we want to look good in front of our group, or personal motivations to make money and have influence over other people. But experiments do find that if you change the incentives, you can change what people care about online.
So, for example, we can have what we call credibility meters. If we give people a credibility score on their social media account that's visible to others, it changes the game, because you don't want to look like a low-credibility actor. You don't want to post crappy content and then lose all your followers because your account says you have low credibility. And you can do this in a bipartisan way through crowdsourcing, like Community Notes on X, where people leave comments in a crowdsourced way, and research shows that leads to more accurate and more discerning posting.
Another option, instead of the credibility meter, is tweaking the algorithms to disincentivize promoting inflammatory content and to favor educational or scientific content. That's often less popular because it reduces engagement, and less engagement means fewer dollars for social media companies. It's a difficult trade-off, but I think maybe we have to accept that engaging a little bit less might actually not be a bad thing for human psychology.
Dana Taylor:
I enjoyed our conversation. Thank you so much for joining us, Sander.
Sander van der Linden:
My pleasure. Thanks for having me on.
Dana Taylor:
Thanks to our senior producer, Shannon Rae Green, for production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to [email protected]. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.