Unpacking A Rumor: Kate Starbird on her Winding Path from Boston Marathon Rumors to Election Meddling

FSI Stanford
8 min read · Oct 3, 2019

Stanford alumna and former WNBA star Kate Starbird is back on campus this fall after seven years at the University of Washington as an assistant professor in the Department of Human Centered Design & Engineering. Now a visiting associate professor at the Cyber Policy Center, part of the Freeman Spogli Institute for International Studies (FSI), Starbird hopes to expand her research on rumors and disinformation campaigns. She talked with FSI about her recent work and some of her “ah-ha” moments.

FSI: The terms “misinformation” and “disinformation” are sometimes used interchangeably. What’s the difference?

Misinformation is an umbrella term for any information that’s false. It can be purposeful. It can be accidental. We’ve seen misinformation happen, for example, during a crisis when someone is trying to figure out what happened and simply gets it wrong. Or something that was true two hours ago becomes false two hours later. Misinformation doesn’t necessarily involve intent; it just means false information.

There is some interchangeable use of “misinformation” and “disinformation.” At the highest level, the difference is one of intent. Disinformation is intentional misinformation. That’s the broad, general use of the word. Then there’s disinformation of a very particular type, when it’s used strategically. This usage has a connection to historical strategies employed by Soviet intelligence, and understanding those can, to some extent, help us understand modern, online disinformation.

Although the Russian disinformation campaigns took many people by surprise, you’ve been studying disinformation for a while. Take us back to that lightbulb moment, when you realized this type of study and research was something you wanted to pursue.

Around 2013 we started studying misinformation and rumors, or what we call rumoring, during crisis events, starting with the Boston Marathon bombing. In our first study of the Boston Marathon bombing, we identified six or eight different rumors and featured six of them in our research. One of them was a conspiracy theory about the event claiming it wasn’t what it seemed — that the media were lying to everybody, and that the real perpetrators were actually Navy SEALs.

At first, we were like, “What is this?” It was kind of small. There were other rumors that were much more widely shared; this one was shared by a relatively small number of accounts. I think it had 20,000 tweets, rather than the 100,000 tweets that some of the other conspiracy theories had. Maybe it was even smaller than that. We saw it. We studied it. We featured it. It was much different from the other rumors we saw in terms of who was sharing it. The websites that the tweets were connected to were very different from the websites we normally see in our data sets on crisis events. There were these weird features that we looked at. And we wrote about it, but in essence, we thought it was really marginal. We were like, “This is kind of weird, but it’s not really worth focusing on.”

That was in 2013. I was thinking, “It’s nothing I want to focus on.” I didn’t want to give talks about conspiracy theories for the rest of my life. Over the next few years we kept studying rumors during crisis events, and we kept seeing similar rumors, especially around shooting events in the U.S. They would claim that the shootings weren’t actually happening — that they were a “hoax” staged by “crisis actors.” Or they would claim that the shooter wasn’t who the media said it was, that it was someone else. Often they would blame the U.S. government in some way for perpetrating the event.

We kept seeing them in our datasets, and my students wanted to study them. “No,” I said, “we’re not going to study any more of those.” But they kept finding other hoax and crisis-actor rumors. The students were doing the analysis I told them not to do, and they were the ones who noticed all the automated tweeting from bots that was occurring. In the fall of 2015, they brought me some of their analyses of Twitter data after the Paris attacks, and they said, “Okay, we have to look at this. There are a lot of bots here. And there’s a weird connection between the accounts talking about this.”

One of my students created this map of structural relationships on Twitter — of who follows whom. There were connections between accounts that just didn’t make sense. There were Brexit accounts connected to white supremacists in Europe, which were connected to Anonymous and WikiLeaks accounts, which were connected to pro-Trump accounts. I thought, “That doesn’t even make sense,” because at the time some of those would have been considered left-leaning and some far right. And why were they all together?
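For readers curious what such a “map of structural relationships” involves, here is a minimal sketch of building a who-follows-whom graph and surfacing clusters of tightly connected accounts. It is purely illustrative: the account names and follow edges are invented, and the networkx-based approach is an assumption, not a description of the actual analysis pipeline used in this research.

```python
# Illustrative sketch only (hypothetical data, not the researchers' actual pipeline).
# Builds a directed "who follows whom" graph and groups accounts into clusters.
import networkx as nx
from networkx.algorithms import community

# Hypothetical (follower, followed) pairs collected from a platform API.
follow_edges = [
    ("brexit_account", "wikileaks_fan"),
    ("brexit_account", "pro_trump_account"),
    ("anonymous_supporter", "wikileaks_fan"),
    ("pro_trump_account", "anonymous_supporter"),
    ("white_supremacist_eu", "brexit_account"),
]

# Directed graph: an edge A -> B means account A follows account B.
G = nx.DiGraph()
G.add_edges_from(follow_edges)

# Group accounts into modularity-based communities. Unexpected bridges between
# ideologically distant clusters are the kind of pattern described above.
clusters = community.greedy_modularity_communities(G.to_undirected())
for i, cluster in enumerate(clusters):
    print(f"Cluster {i}: {sorted(cluster)}")
```

The real analyses would of course involve far larger graphs and more careful methods; the sketch only illustrates the underlying data structure.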

We also began to recognize that the underlying themes of these crisis-actor rumors were showing up in the political rhetoric of people who were gaining power around the world. This was around the time that the Brexit conversation was taking shape and Donald Trump was on the rise in the Republican primaries. And we started to see that the websites Trump and his people were citing, and the websites supporting Brexit and other causes, were the same ones cited in our conspiracy theory data. We were like, “What is going on here?” Then we began to recognize that there’s this vector of disinformation that leverages conspiracy theories to manipulate political outcomes across the world.

You write that disinformation is tactical but the message itself doesn’t have to be. As we’re getting closer to the election, and we know more than we did before, are you seeing some of those similar things cropping up, or are they different from before?

It’s so difficult. We know that there are disinformation actors doing things within social media spaces, which are the online spaces where people get their information. It’s really hard to take any set of actions on a tweet, or even a group of tweets, and say, “Oh, that’s a Russian effort,” or “That’s a domestic effort,” or “That is an organic effort,” because they’re all so intertwined. And they all look like each other, because the Russian trolls have gotten very good at imitating Americans. Organic actors actually seem to be just as troll-y as paid ones, so it’s really hard to distinguish between domestic actors, foreign actors, and individuals. Social media platforms don’t have an easy job. In fact, our most recent work talks about how hard it is to disentangle organic political activism from disinformation campaigns, because disinformation campaigns infiltrate organic activism. Individuals spreading disinformation may be sincere, not knowing that they are essentially aiding a disinformation campaign.

Are there things the social media platforms could be doing that they aren’t?

There’s no solution where I could say, “Hey, you should be doing this,” and they would say, “Oh gosh, you’re right!” They are, however, in conversation with the research community in a way they weren’t before. They’ve hired great people with diverse backgrounds to understand how information spaces are manipulated. They are definitely thinking about the problem. But it’s a sticky problem. Plus, we (as users and information consumers) don’t want social media companies to decide what content is okay, and what content isn’t okay. At the same time, we don’t want our information spaces to be manipulated by a small group of actors who use disingenuous techniques to make their voices artificially stronger than everyone else’s. I think there’s something to be said for helping the public understand the problem, so we as consumers and as voters can help the platforms that we use, and the government that we have, make the right decisions.

In many cases, the social media companies are trying to make the right decisions and the right changes. But if those changes happen to impact someone who has gained power through nefarious techniques, that person might say, “Oh, it’s biased.” This is something like “working the refs” in sports. But this idea of somehow being “politically neutral” isn’t the right way to think about things, because though disinformation targets both sides of the political spectrum, it’s not symmetrical. The right approach is to work to support authentic, democratic discourse. Let’s focus on that as a guiding value, and not say, “Okay, we’ve got to be equal between right and left.” If one group of people is manipulating the platform, and they happen to have, or be pretending to have, a specific ideology — far left or alt-right, liberal or conservative — the platform should be able to take action against that kind of speech. Not because of their political stance, but because the techniques being used are disingenuous and disrupt our ability to understand things and to know things. If we could change the discourse around that, it would be really beneficial.

Unfortunately, a lot of political messaging is based on people not really understanding what the problem is. Anyone who has gained power from the world working like this doesn’t want people to understand how they’re being disempowered by disinformation in their information systems, whether that information is coming through social media or their television.

What should we be aware of when we see something being shared by friends and family that doesn’t look right?

Before disinformation became a thing, I had been studying how people correct themselves online and how they correct others who share rumors or misinformation. So I started to just randomly correct people on Facebook. This did not make me any friends, and I’m pretty sure it didn’t change any minds. However, it did help me understand the limitations of that social media platform — especially how corrections get buried over time by other comments.

I think the best thing is, if you see someone in your family or a close friend who you feel might be becoming more and more radicalized, or who is increasingly sharing this kind of content, I would have an in-person conversation. I don’t think that online corrections are going to be beneficial. The research suggests that it’s all contextual. Sometimes there’s a backfire effect, where you correct someone and the person just becomes more entrenched. But there is some research suggesting that backfire effects aren’t consistent across contexts. It depends on the tie you have with the person, what kind of misinformation they shared, and how you correct it. There’s actually still a lot of ambiguity in the research; I don’t think we know what the best way to correct misinformation is. But I do feel that in-person conversations are often the most productive.

You’ll be arriving on campus soon after seven years at the University of Washington. What are you looking forward to about being back on the Farm? Any spots you’re going to visit?

Mostly I’m just looking forward to all the conversations that I’ll have with colleagues there — at the new Cyber Policy Center, over at Computer Science, and yes, some old friends at the Athletic Department. I’m really excited about absorbing all of the expertise from people who have been working on these problems from other perspectives. Then, of course, reconnecting with old friends.

Written by FSI Stanford

The Freeman Spogli Institute for International Studies is Stanford’s premier research institute for international affairs. Faculty views are their own.
