Confronting the Disinformation Age:
A Conversation with Renée DiResta

Guest: Renée DiResta
EPISODE 18: THINK BAD, DO GOOD


Jonathan Reiber, VP, Cybersecurity Strategy and Policy, AttackIQ

Renée DiResta is a pioneer in the study of disinformation, and through her research at the Stanford Internet Observatory and her regular contributions to The Atlantic she has made her voice heard on the harms of amplified propaganda and its role in shaping public opinion.

How do false narratives spread? “You have human nature, which has not really changed very much in many ways over time, either. A lot of the kind of psychological motivators have been consistent. What do people need, what do they want, what are they looking for?” Renée investigates the intersection of platform algorithms with user behavior and factional crowd dynamics to get to the root of the problem. “What really does change is the communication technology. And when we’re talking about propaganda, which really is referring to messaging, we’re talking about ways in which entities who are trying to achieve a particular objective use communication to send messages to the public.”

In this installment of Think Bad, Do Good, Renée and Jonathan examine the role of “filter bubbles” in the dissemination of false narratives and individual agendas, the polarization of public opinion, the blurred lines between fact and bias, and the growth and spread of extremism. “Another thing that we see a lot in our work is looking at what makes things go viral,” Renée says. “People make crazy claims on the internet all the time, but what starts to happen is that you’ll see incentivized influencers with very large followings who will pick up that claim, but they do it in a really interesting way.”

Tune in to learn more.


Mentioned in this episode:


“Breaking the Social Media Prism: How to Make Our Platforms Less Polarizing,” by Chris Bail


Renée DiResta

Renée DiResta is the Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks, and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.

You can see a full list of Renée’s writing and speeches on her website, www.reneediresta.com, or follow her on Twitter @noupside.


JONATHAN REIBER:
Hey everyone, and welcome to today’s episode of Think Bad, Do Good, published here at AttackIQ. It is my great pleasure to have my friend and colleague Renée DiResta here on the line. Hey, Renée.

RENÉE DIRESTA:
Hey there.

JONATHAN REIBER:
So, for those of you tuning in who don’t know who Renée is, and I’d be surprised if any of you are in that category, Renée is the technical research manager at the Stanford Internet Observatory at Stanford University. It’s a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. Obviously very important. Renée investigates the spread of narratives across social media networks, with an interest in understanding how platform algorithms and affordances intersect with user behavior and factional crowd dynamics. She’s also a very prolific writer, and she publishes quite a bit. I’ll put some of the links to her work in the podcast notes, so you can just click on them. She studies how actors leverage the information ecosystem to exert influence, from domestic activists promoting health misinformation like anti-vax theories and conspiracy theories, to the full spectrum of information operations executed by state actors, which is obviously a very big deal. She’s advised President Barack Obama, many members of Congress, and she writes for the public through tons of outlets. The first piece of hers that I read was “The Digital Maginot Line,” on Ribbonfarm; I’ll put in the link. Gosh, when was that? Like four or five years ago?

RENÉE DIRESTA:
Yes. 2018, 2019. Yeah.

JONATHAN REIBER:
Yeah, that was a big piece that she published that really got her ideas out there, but now she publishes a lot with The Atlantic and Wired, and you can find a ton of her work on The Atlantic’s website. And she publishes a lot of research through Stanford, which includes her new report, Unheard Voice, published jointly by Stanford and Graphika. You can find that on the Stanford website. And very exciting, Publishers Weekly has recently announced that she’s working on a new book, which I promised not to ask her about, because there’s nothing worse than asking someone about their book while they’re writing it. It’s called Agents of Influence. So it’s going to come out, what, in about a year and a half? No pressure.

RENÉE DIRESTA:
God willing.

JONATHAN REIBER:
So, keep your eyes out, folks, and we’ll have her back on, if she’s willing, once that comes out. So welcome, Renée.

RENÉE DIRESTA:
Thanks for having me, Jonathan.

JONATHAN REIBER:
Yeah, thanks for joining. So let’s dive in. You have said that propaganda has always been a part of violent conflict, which I think is something that people probably know in the back of their head, but maybe they don’t recognize constantly. So that’s a great point. But in the modern era, it takes on a different form than it has historically. And you’ve written about the role of propaganda in history. You introduced this concept of ‘ampliganda,’ and I wonder if you could just talk about that.

RENÉE DIRESTA:
Yeah, I hate making up terms. I feel like that’s one of these terrible things that you have to do in essays sometimes. But what I was trying to get at in that piece was that propaganda evolves to fit the communication technology of the day, and so many of the reasons for conflict have not changed in the span of human existence. The pursuit of territory, of power, of riches: these motivations have always been there. You have human nature, which has not really changed very much in many ways over time, either. A lot of the kind of psychological motivators have been consistent. What do people need, what do they want, what are they looking for? So, what really does change is the communication technology.

When we’re talking about propaganda, which really is referring to messaging, we’re talking about ways in which entities who are trying to achieve a particular objective use communication to send messages to the public.

Sometimes that’s for purposes of persuasion, right? They want to change people’s minds, they want to make them believe a certain thing, they want to create a type of patriotism, a particular type of demonization of some group. There are different persuasion motivations there.
But another use for propaganda is activation. So just riling up the people who already believe that thing and making them feel compelled to take an action, and that’s where you start to see some of the things around revolutions, perhaps, or social movements that get violent at times or even just mass protests.

So you have these different motivations, but the thing that has changed over time is the technology by which those messages are given to the public and who controls that technology. And so, for a very long time, when we talked about propaganda, what we were talking about was those who controlled broadcast media.

And broadcast, of course, in its various forms, has evolved over time, too, right? We have print, we have radio, we have television, but who controls the means of information dissemination? Who controls that technology? It has traditionally, for a very long time, been those who already have power: governments, political leaders, authority figures, working in conjunction with media to push information to the public. I think most of the public tends to think about propaganda in the context of, like, Noam Chomsky and the idea of manufacturing the consent of the governed, this sort of 1980s-era work about the ways in which top-down messages reach the public, and how the public doesn’t fully understand the motivations behind how that information is filtered.

All of that really gets upended in 2000 with the arrival of the internet, which starts with blogs, just ordinary people posting their thoughts, and doesn’t really solve distribution.

So, there’s false information, misleading information, persuasion, all that stuff is out there. But what happens with social networks is people also then gain control over the distribution channels. And this is a fundamentally different thing in history. So instead of it just being from the top down, with people who control the broadcast channels deciding what the public will hear, the public actually shapes what the public hears, as social media enables both creation and dissemination.

So what I was talking about with ampliganda was this idea of ways in which crowd amplification, which really comes from the bottom up, sets the agenda for what people at the top talk about. So that amplification, that idea that this is what the public is talking about today, in turn shapes the media coverage of what shows up on the news at night, and that’s really very, very fundamentally different.
Anybody who is influential among the public on social media has that power, has that reach of mass media that previously was really limited to a very, very small group of people. Now that dynamic is democratized.

So a lot of the work that I’ve been doing for the book, but also just in my work at the Stanford Internet Observatory, is asking, how does that change society? Right? How does it change society when anyone can do this, and everyone is? You alluded to the Maginot Line; the metaphor that I was trying to use there was this kind of war of all against all, right? Anybody who wants to put out a narrative can put out a narrative. We sit in these online spaces seeing these various factions fight each other for who is the main character of the day. What is going to trend, what are the major pundits going to talk about on Fox News or MSNBC at night? That is a very fundamentally different information environment than we’ve ever had before, and I think that that shift is foundationally transformative for society.

JONATHAN REIBER:
Yeah, I think people generally have the sense, given the events of the last five years in particular, that something is deeply wrong. But, and I’m a dilettante at this issue at best, people think about it and probably can’t quite pin down exactly what is happening. I wonder if you could break down some of the negative effects of this change in concrete terms for folks to think about. Like, what are some of the top negative outcomes of the problem that you just outlined?

RENÉE DIRESTA:
I think the main challenge, and I want to be really clear about this, is not ‘what is the message?’ There are certain messages that are obviously demonstrably bad, right? There are certain types of things that we’ve heard over time that are calls for genocide, calls for abuse, or calls for hate. That kind of stuff is demonstrably bad.

What I’m trying to get at with this shift is that the real insidious thing is that we now all have the means for capturing attention, right? Because that’s what is ultimately happening on most of these social channels. Everything is a battle to capture attention, and so a lot of the dialogue, a lot of the discourse, has really just devolved in these online spaces into one faction fighting against another in order to capture attention.

We have created a system of incentives in which ordinary people who have reach, who can make their message heard, nonetheless have to compete in this space by resorting to sensationalism, by resorting to harassment, to try to push other people out of the conversation so they can take on more share of voice. Those fights become a spectator sport, right?

There’s a kind of gladiatorial element. You see something trending on Twitter, and when you click in, it’s going to be two groups of people fighting each other, right? Because the topic is some sort of culture war, a hot-button issue, some sort of flashpoint. And so you, as even a nominally casual observer, receive this nudge that directs you to go pay attention to this very sensational, very polarizing argument, and that’s the kind of content that we’re being barraged with at this point. That’s where the incentives have taken us.

There’s always some sort of incentive structure in communication. In the older propaganda models, Chomsky articulates what he calls the ‘five filters,’ one of which is advertising. He makes the argument that the media is not going to cover certain things because it’s not good for them, because they take advertising from those industries. And so we had this understanding, this kind of intuitive sense, of how those incentives shaped what we saw in prior media environments, how those incentives shaped what was covered. And that’s what we’re experiencing again, but I don’t think that we spend as much time thinking about how those incentives work now: the platform’s incentive in its design structure, right? Keeping people on site, the design of an algorithm, something like the trending topics feature. How do the incentives intersect with the content to put us into particular mindsets and create particular types of exchanges between people in this so-called virtual public square?

JONATHAN REIBER:
Yeah. What do you think about the filter bubble problem? Could you talk a little bit about that?

RENÉE DIRESTA:
So there are two different sets of arguments in social science on this, but I’ll start by saying the filter bubble was something that Eli Pariser coined, I think back in 2012, so 10 years ago now. He was arguing that, because people were seeing personalized search results, there was a potential for people to see things that were in line with their prior biases or targeted to them, and that this might lead to people moving into filter bubbles in which they and their neighbors were not seeing the same things. So the question that he asks is, what happens? What does this do to society? And that was about search results in 2012.

Where we have gone since then is, interestingly, a lot of the social media companies, again, from an incentive standpoint, were really incentivized to grow whole new social spheres for their user base, because that would keep them on site.

So you went from bringing the social graph that you had in real life to Facebook, meaning, these are my friends who I go out with, I go to bars with them, and now we’re going to post our bar pictures on Facebook, which is a thing that we all did in the early 2010s; to, hey, our machine learning algorithms have inferred demand and we think that because you… This is a specific example from my life. You just had a baby, you are in a baby food group and you must be a crunchy mom, have you considered going and joining this anti-vaccine group? And so the nudge comes at you, right? It’s not an unreasonable assumption.

JONATHAN REIBER:
They really got the wrong person on that one.

RENÉE DIRESTA:
They did really get the wrong person. But I thought it was so interesting, right? Because I was thinking about it from a user interface design standpoint, and I was like, why am I getting these nudges? This is insane.

But it’s because, in collaborative filtering, I am in San Francisco, I make my own baby food, I cloth diaper my kid, obviously I must be an anti-vaxxer. It’s not totally unreasonable, but the algorithm, in fact, the platform is incentivized to give me that little nudge.
Now maybe I don’t take it. Maybe I ignore it, and then the next time the recommendation engine serves me something, it recognizes that the signal I gave back was not engaging with that nudge or maybe that was a bad nudge. So here’s this other thing. Maybe you want to see backyard chickens. That was another thing that it was pushing me for a while. Maybe you want to start a chicken coop in your backyard, hippie.

JONATHAN REIBER:
That really causes rats, by the way.

RENÉE DIRESTA:
I actually did join the Backyard Chickens group because I was like, oh, pictures of chickens! Well, what are people doing in the Backyard Chickens group? I enjoyed it until the sick chicken pictures started, and then I was like, get me the fuck out of here. You know? But it was one of these things where I was like, okay, there’s this little nudge, and how serendipitous is this? Let me go down this rabbit hole, and sometimes it’s just harmless and fun. Again, I had my backyard chickens for about a week, but then other times you do see how that nudge is kind of finding social communities for people.

If you’re not making recommendations that are cognizant of certain dynamics around radicalization or around extremism, which the algorithms really weren’t at the time, you can actually kind of set people off down some pretty bad paths.
And so what happened with things like QAnon was exactly that pathway.

It was this series of nudges where the algorithm thinks it’s just bringing together all these highly engaged people who really want to talk about political things and puts them in these groups. And then we wonder why the community turns towards violence in some ways; what do we do about that after the fact? What do we do after those network connections have been made? So the interesting unintended consequence there with things like filter bubbles is that, you know, there is a potential to filter and sort people into groups. There is some research that suggests that the filter bubble problem is overstated, and Jonathan Haidt and Chris Bail have been keeping up a Google Doc, kind of a living lit review.

These are two social scientists. Jon is at NYU, and I’m trying to remember where Chris is at the moment (Duke!). But Chris wrote a book called Breaking the Social Media Prism, and his book actually goes into these questions. What happens when people are sorted and assembled into particular communities? There are some theories that very, very, very small percentages of users on social media are sorted into these groups. The problem is that a small percentage can still be a big number, right, and that a small percentage, even if it’s a relatively small number of people, can be harmful in some ways. So that’s where the social science on filter bubbles is at. They’re probably not as big of a problem for the average person. The average person is still seeing stuff outside of their particular niche interest or niche political point of view, but for some people the suggestions go a bit haywire.

The other interesting theory is that one of the reasons for polarization, where groups actively don’t like each other, is not so much the filter bubbles. There was a new paper that came out, I think just in the last couple of weeks, and I forgot the scientist’s name unfortunately, but what they found was actually this argument that’s sort of intuitive: it’s that you are constantly seeing people who are in this opposing political class or this opposing political position, which makes politics the thing that people see so often. So it’s actually the kind of factional battles that are happening because hyper-partisan communities are encountering each other on these platforms and clashing. It makes almost the opposite point: that people have been assembled into factions. We now have this as such a strong identity, and the norms of behavior are that harassing and dunking and owning your opponents is how we conduct ourselves in political debate today. And so having that arena, something like Twitter or something like Facebook, provides an opportunity to do this in ways that, again, in past communication environments we never really would have been able to do.

JONATHAN REIBER:
Yeah. So historically, when I got into national security, part of it was because of the Rwandan genocide, right? And you think about Radio Mille Collines, the radio station that the Hutus used to galvanize the population to commit a genocide in which they killed a million or more Tutsis and sympathizers. That was, at least from 30 years ago, the original violent filter bubble, in radio form.

RENÉE DIRESTA:
Yeah. I’ve been spending a lot of time… the interesting thing about books, I think, is the extensive amount of research that never actually makes it into the book. I’ve read, I think, almost the entirety of the archives of the Institute for Propaganda Analysis; I went and dug them up. It was this body in the 1930s, a bunch of academics, actually. I was curious about what the response was in these prior media environments, as I’ve been trying to think through how I write about past responses. These were academics, and they were producing educational material for high school and middle school kids mostly, and the person whose work they were analyzing was Father Coughlin, this priest who had a radio show that reached 30 million people.

JONATHAN REIBER:
In New York, right?

RENÉE DIRESTA:
Yep. He’s very interesting to me as I’ve been writing about influencers and responses, this person who had this just extraordinary capacity. Everybody at the time largely listened to a very small number of channels, so his audience wasn’t fragmented in the way you see on social media today, right? Where an influencer has a mass of a couple hundred thousand followers, maybe a million, and a million is like a big deal. But this person had just absolutely extraordinary influence, and this was a group of academics who were arguing that you had to teach people to recognize the signs, because Father Coughlin was moving into fascism, this sort of American fascism at a time when the Nazi party was rising in Germany. The things that they were putting out were trying to help the public see this, see the calls for what they were, by teaching them the skills to recognize how this rhetoric actually works.

I found it really fascinating because I think that sort of rhetorical analysis, almost like meta-education, which just teaches people to understand what the signs are that somebody’s playing on your emotions, what the signs are that you’re being manipulated, has largely been lost. But if you look at how people talk today, a lot of the hyper-partisan political influencers in particular, the rhetoric is quite similar.
It’s the same. Like, “they” don’t want you to know. Who the hell is “they”? Once you kind of start to flag how these flourishes work, you hear them everywhere. I’ve actually found that process of going back and reading these old archives really interesting from the standpoint of… the internet didn’t cause these problems; the internet changed the problems, it changed the manifestation of how the message got there. Radio was just incredibly powerful.

JONATHAN REIBER:
Yeah, I mean, one of the data points that I like to cite is that the internet is the same age as Chris Hemsworth and Nicki Minaj, and it went from zero to 5.2 billion users in their lifespan. I think maybe one of them just turned 40 this year. That’s such rapid, incredible growth. So what would be some of the recommendations you would make to the world for how to respond to manipulative language? That’s an interesting point. Like, if you were to tell people right now, what are some of the signs that you’d watch out for?

RENÉE DIRESTA:
I think it’s teaching people. The glittering generality is one of the terms that they used in the 1930s for the way in which groups of people are painted with a broad brush. When you hear somebody reducing millions and millions of your fellow citizens down to one label, what is that doing? What is the point of that? What is the intended effect when the speaker does that? Or the use of words like “they,” where there’s some implied boogeyman: trying to think through, who is “they”? Is this plausible? Is “they” really a plausible statement? Why isn’t the speaker giving a name? Why aren’t they articulating who is theoretically doing or saying this? Again, this ties into the generality.

Another thing that we see a lot in our work is looking at what makes things go viral. You will see small accounts make a very, very concrete claim and then say something like (we can use the context of an election right now because it’s top of mind for me), “There’s a suitcase outside of my polling place, that’s how they’re delivering the stolen ballots,” as a representative example. So, someone will make a claim, and this is the sort of thing that most people would just ignore. People make crazy claims on the internet all the time, but what starts to happen is that you’ll see incentivized influencers with very large followings who will pick up that claim, but they do it in a really interesting way.

They don’t say anything concrete about it. They say, “If true, is somebody looking into this? Somebody should look into this. Who is investigating this? This is what the media is not investigating.” You’ll notice they’re not actually endorsing the claim in a particular way. They’re not saying this is happening; they’re just saying, somebody should really look into this.
So it’s layering suspicion onto a claim that most people would largely disregard. There’s almost a short-circuiting, where somebody with a very large following, who you’ve come to trust over time as the person putting out this message, is implying a degree of suspicion. And by implying that degree of suspicion, you’ll then see many of their followers go pick up the claim and begin to tweet it or share it, and they will actually revert back to saying something concrete about it.

Here is how X, Y, Z is stealing the election, right? So there is this interesting middle ground where the people who are most powerful in the amplification chain actually take a step back and hedge what they’re saying. They don’t want to get sued. They don’t want to make a concrete claim a lot of the time, or they don’t want to stake their reputation on being wrong. There is that interesting dynamic. And once you start to see how these things are assembled, you see how these types of language trigger our proclivity as people to look at things like rumors or conspiracy theories, be intrigued by them, want to know more, and want to share.

Again, rumors are not new. Conspiracy theories are not new. Propaganda is not new. It’s just that now we’re very active participants, because we in turn get to decide: are we going to click that share button and propagate along the “if true” and the “they don’t want you to know,” or does it kind of stop there with that influencer who’s just chosen to share it? So I think this dynamic is very interesting to me.

JONATHAN REIBER:
That was also true of some of the things that the Russians did around vaccine hesitancy early on, and what they did during the 2016 campaign, where they weren’t saying vote for X person, they weren’t being directive. They’re saying, well, isn’t this an interesting question? Like, who’s behind your suffering? Is it this other racial group? Or, if you take this vaccine, maybe you’ll die. Someone should be looking into that. So it’s actually sowing doubt as opposed to offering a directive. Is that right?

RENÉE DIRESTA:
That’s very much how it manifests, I would say, an overwhelming amount of the time. You mentioned the Russians. When we looked at the stuff from 2016 to 2018, early on they were trying to kind of create their own content and play both sides and boost their own messaging. But what they started to do was just pick up hyper-partisan media and influencers that were real Americans, kind of. They would sometimes… this was always really funny to me. They would slap their own logo on a meme that had been branded with some hyper-partisan American publication, and so they would try to grow their audiences by cribbing off that content.

The point is, they’re using real American narratives. It wasn’t like they had to make up something for Americans to believe. This is where we get at that propaganda of activation, right? These are people who believe this thing already, you just want to galvanize them, harness them, and then pit them against each other. You’re not trying to persuade anybody to a new or novel point of view here.

What we saw with the Russian stuff was constant emphasis on your identity as an American and what your distinct American identity was. Were you LGBT? Were you black? Were you white? Were you a descendant of Confederate soldiers? Were you a Texan? Were you a feminist? Were you the spouse of an incarcerated person? I mean, they got real niche on some of these things, and what you started to see was the creation of pride in all of these different identities, but then the pitting of them against each other.

The argument being that these identities could not all coexist in America. If your identity group was experiencing some sort of grievance, whatever that was, say veterans who were not receiving benefits, then rather than thinking of this as a collective problem to be solved, the message was that “the problem for veterans was that too many benefits were going to refugees.” So you had the United Muslims page that they ran, which was pro-refugee and pro-Islam, and then a handful of these veterans-oriented pages (“Veterans Today,” I think the name was), and they would just pit these groups oppositionally, because there wasn’t enough to go around. There was a scarcity dynamic. If your group had a problem, it meant that some other group had taken something away from you.

This was not something that we could solve collectively as Americans; it meant that society had to fracture along these different identity lines. So that kind of fractionalization of American identity was where they chose to go, and they just did it over and over and over again on all platforms for a very long time. We were kind of doing it to ourselves. I think it would be a gross overstatement to say that the Russians were the first people to realize that America had some fraying of the social fabric, but they recognized it as a viable strategy for interference.

JONATHAN REIBER:
Yeah, it’s like a very light touch. They could just intervene with a small amount of investment and exacerbate what already existed. Right. So we’re in an election right now; the voting is already happening. Your interventions today could, to some degree, probably change some folks’ minds. But if you were to think over the course of the next month to six months, what are some of the positive trend lines that you would like to see accelerate to improve our current situation? What are some concepts that haven’t really been adopted in managing the problem you’ve described that you would like to see adopted?

RENÉE DIRESTA:
So, I see my role, and our role at the Stanford Internet Observatory, as understanding what is happening and how it happens, right? Maybe why it happens, to some extent. We do a lot of interdisciplinary work with people in psychology or people in rhetoric or communications; it’s a very interdisciplinary team. But we do a lot of chronicling of what is happening, and we make recommendations for how government should think about this. The question of what government should do as a whole, we could spend an hour on that alone, because we have hit a really interesting point in society in the U.S. where trust in government is so low that the idea that government should have a role in understanding social media narratives around elections or pandemics is seen by some segments of the population as some sort of “ministry of truth” or some sort of surveillance, as opposed to the government taking the pulse of where the American public is or having a role in American elections.

That has really accelerated over the last two to three years. “Should the government even have a role in this?” It has been a bit surreal to see. What we can do as academics is say, empirically, this is our best understanding of where the problem is and how it manifests. We can also say, and we did during COVID and during the election, here is a specific narrative in a specific region; it is showing signs that it is being adopted, it is getting engagement, which is maybe a proxy for impact. We don’t know for sure. We don’t know if it’s, again, changing minds or just activating people, but either way, this is a thing that seems to be important in this moment.
So we can do that work, but then comes the question of what do you do about it?

There are two time horizons. There’s the short term, right, which is when the rumor is going viral. Who is best equipped to counter-speak against it, right? To say, this is what we know right now, and here’s what we think the truth of this matter is. What we saw during COVID, over and over and over again, is that people who had a lot of experience would often not participate in those conversations until they had a degree of certainty. You saw this around the masking conversation with the CDC very, very early on in 2020, or the question of whether COVID was airborne. There were a number of things where influencers, including some scientists who were just not CDC people, came out and said, this is what we think is true.

Now, when you have a rumor going viral, you’re not always going to have a neat fact check and a certainty with which to speak against it, but that doesn’t mean that you should just let it snowball. So that question in that immediate moment is who is actually responsible for doing that counter speech? And more importantly, who is a trusted messenger capable of actually persuading or reaching the audiences that are most concerned about that particular rumor in that particular moment?

That sort of rapid response really requires some sort of partnerships and infrastructure. We worked with a couple of groups of doctors during COVID who wanted to put out content as physicians, right? There was a group called “This Is Our Shot,” and they wanted to say, look, we’re a group of a couple thousand physicians and we want to show ourselves getting vaccinated. We want to answer people’s questions about the vaccine, the disease, all of these different things, but we don’t know who’s talking about what at any given time. So, how can we bridge this knowledge and that expertise and that desire to be communicators? How can we bring those things together in the short term? And then there’s the longer-term time horizon.

Hence my interest in the Institute for Propaganda Analysis and why I’ve been reading all this stuff. It’s because they’re doing it on a longer-term time horizon, right? They’re not trying to fact check Father Coughlin’s speech from the night before. They’re saying, we’re going to create resilience in the population against these types of demagogues and this type of propaganda by helping people understand these tricks, these rhetorical tricks, so that they can recognize them anywhere at any time.

So that’s an educational response, not something that has to be done in the moment, but something that has to be done because these moments will continue to pop up; the information environment is not going anywhere, right? This is the new normal from a communications perspective. You have that short-term rapid-response piece, but over time you’re helping the public understand what is new, what is different, and to be educated about how they, as citizens in America in 2022, should think about the things that they hear and see. How does an algorithm work? Why is this in your feed? Is that trend something that is created by highly incentivized people who just want you to fight with each other? You know, these sorts of things.

JONATHAN REIBER:
I think, as the last question, and you’ve been very generous with your time: I’m mostly interested in how national-level violent narratives spread, things like a January 6th event. So take that as, there’s a lie being told that the election was stolen, and it spreads online. Then you look at platforms like Twitter, Facebook, and others, and the development of their ability to either flag individuals that are spreading mis- or disinformation or to identify the blurred lines between truth and fiction.
What do you think is the possibility for enforcing against that kind of information at scale to prevent these events? The insurrection has within it a whole other investigation, where there are elements of conspiracy and national political leaders involved in spreading it; that accelerated problems that had been in place for years before then. But take interventions like the removal of Donald Trump and Marjorie Taylor Greene from Twitter around January 6th. They were only removed or filtered in the days immediately before, on the day, or immediately after, right? But if you want to prevent these narratives that you’ve talked about from spreading and becoming a movement, what do you look for in the platforms and their ability to actually enforce the kinds of changes that need to happen?

RENÉE DIRESTA:
What you’re asking about there is a really challenging question, and I’m sure you know that, because it’s this question of… there are a few things that happened.

First, there’s a progressive buildup over time. What kind of speech is not incitement in the moment but is demonizing or othering a community or population over time? Again, to get at some of the other topic areas we’ve talked about, the platforms have rules about immediate incitement to violence. As for that sort of slow buildup, that dehumanizing language, one of the things that we saw in the Facebook leaks, the Frances Haugen papers, was that they are trying to understand that internally: what are the signals that indicate that a community or group has a proclivity to violence? And we did see that around election day in 2020 and in the days immediately after.

So, President Trump makes his announcement on election night alleging that the election has been stolen, or heading in that direction; I’m not a hundred percent sure when that exact claim was made. But as he begins to head in that direction, many, many groups, the “Stop the Steal” groups on Facebook, grow extraordinarily quickly. Hundreds of thousands of people in a day. And what you start to see is the platforms taking steps to take them down as the rhetoric in the groups heats up, and they’re caught off guard. They are blindsided by the extent to which this happened, and so you see them kind of stumbling around and being a bit reactive. I think that it was something of an unprecedented lead-up; I think it was a bit jarring. I think some people who study extremism have made arguments that January 6th was actually very predictable if you were watching certain online spaces. The platforms have tried to figure out where to strike that line between freedom of expression, your right to express anger or distrust in an elected official or an outcome, versus the extent to which a buildup of those things leads people to then go and act.

I don’t have a neat answer. I think this kind of dynamic is the sort of thing where, unfortunately, that research happens internally, and we can’t see it from the outside. We only see it when it leaks, and so we don’t have visibility into those signals. When my team looks at stuff, we can see, oh, this group is growing very fast, if it’s a public group. As academics, we can only see it if it’s a public group. We don’t go and join secret groups or anything like that, but we can see these public groups growing, and so we can say, hmm, it seems like this is really a thing that a lot of people are caring about and agitated about, but we can’t make a leap from that.

That is where these questions come in, again, the short-term versus the long-term time horizon. I think what you’ll see during election week, and we’re about two weeks out, is the platforms acting very decisively this time around, if there are indications that unrest is growing in particular locales. We don’t have a national election this time, so I think it’ll look a little bit different than 2020. But if there are these indications that rhetoric is heating up, I think you’ll see takedowns happen. Beyond that, again, in the longer term, we’re arguing for researcher access and things like that, so that we, as social scientists and others, can have visibility into what is happening and the momentum by which these groups grow, which is something that we just don’t have particularly strong visibility into.

JONATHAN REIBER:
Yeah, you’re often looking at it historically and doing a marvelous job. I mean you’ve really done incredible, incredible work in shaping the debate and the country’s understanding of the problem set and the world’s understanding. Congratulations on all you’ve written recently, and all that’s coming down the line. Certainly everyone’s excited to see what you continue to come up with, so keep up the great work and thanks for coming on. Is there anything you’ve left unsaid that you wanted to say?

RENÉE DIRESTA:
No, thanks so much for having me.

JONATHAN REIBER:
Yeah. It’s wonderful to see you. Thanks, Renée.