Unraveling Disinformation in Social Media

February 8, 2021 • Caitlin Mattingly

The last few years, and the most recent election cycle in particular, have brought forward unprecedented levels of misinformation and disinformation. This era of online disinformation bots, fake news, and interference from foreign adversaries has sown the seeds of division in our culture, much of it distributed and amplified on social media platforms.

Jane Lytvynenko is a senior reporter at Buzzfeed News, and the past several years she’s been focused on disinformation — where it comes from, who’s seeing it, how it works, and what might be done to defend against it. She joins us to share her insights.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 195 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.

The last few years, and the most recent election cycle in particular, have brought forward unprecedented levels of misinformation and disinformation. This era of online disinformation bots, fake news, and interference from foreign adversaries has sown the seeds of division in our culture, much of it distributed and amplified on social media platforms.

Jane Lytvynenko is a senior reporter at Buzzfeed News, and the past several years she’s been focused on disinformation — where it comes from, who’s seeing it, how it works, and what might be done to defend against it. She joins us to share her insights. Stay with us.

Jane Lytvynenko:

I’ve been with Buzzfeed for four and a half years now. And for the entire time I’ve been focusing on disinformation and misinformation. Before that I was a media editor, a reporter, a freelancer early in my career trying to pick up things wherever they landed. But as soon as I started at Buzzfeed, we really jumped into the disinformation beat. I started around November 2016, a couple of weeks after the election and have been at it since.

Dave Bittner:

Well, that was a good time to get into the disinformation biz. There was a lot of stuff to dig into. Has the tech side of things been interesting to you your whole life, or is it the disinformation side, the psychological part, that’s attractive to you?

Jane Lytvynenko:

No, I’ve always been a little bit obsessive about topics like cybersecurity. I think back to when Anonymous was making headlines; that really captured my curiosity as a young university student. I was always curious about this not-always-seen world (I don’t want to call it unseen, but it’s certainly not seen by everybody) of online politics and online personalities, how they play out, but also the technical aspects of things. I’ve been very stringent about cybersecurity and always really interested in that sphere. So when an opportunity presented itself to dig further into it from the perspective of disinformation, of course I jumped at it, because at that time in North America it was such a novel direction to look in.

Dave Bittner:

What was it like for you getting up to speed? You’re an experienced journalist, but when you decide to dig in and really focus on this one area, what is it like hunkering down to get all that background information? What is that process like for you?

Jane Lytvynenko:

That process initially was making a spreadsheet that almost killed me. My first assignment was trying to map out the hyper-partisan left and hyper-partisan right universes in the U.S. And again, at that time, there were so few people focusing on this area that the research was fairly sparse. Now, there were some academics who had been studying this for years, but it wasn’t the bustling conversation topic and industry that it is now. So essentially, for the first couple of months on the job, I sat down, I opened Google Sheets, and I looked for these websites by going down rabbit holes, whether on social media or via websites that referred to one another, finding their associated public accounts and trying to figure out what was happening, who the major players were, and who was behind them. Was it all U.S.-based, or were there foreign entities as well? Spoiler alert: there were foreign entities, Macedonians mostly, but lots of other Eastern European countries got in on the action as well.

So in the end, we found something like 800 websites, which I think we culled down to maybe 600 or 700, but that exercise really allowed me to see what the playing field was and really insert myself into it.

Dave Bittner:

And then where did it go from there? You’re looking at all of this data that you’ve gathered; how do you then distill it into a compelling story that you can share with the world?

Jane Lytvynenko:

Buzzfeed is very lucky because we have a lot of data scientists and people who are used to analyzing things in bulk. So my colleague Craig Silverman and I reached out to some of our colleagues, who helped us pull down posts from social media to show what the engagement was like on those websites. The subsequent piece was titled “Inside The Partisan Fight For Your News Feed,” and essentially what we did was look at the engagement of what we deemed partisan media on the left and partisan media on the right. We talked about the main actors, and we explained the ecosystem that was forming on Facebook, or maybe had already formed, depending on your point of view.

Dave Bittner:

Were there any striking differences between those two elements, the left and the right? Was one farther ahead of the other? Was one more organized than the other? Was one more successful in its messaging? Or were they running neck and neck?

Jane Lytvynenko:

There were times when they were running neck and neck, but by and large, the right-wing partisan universe won out on pure engagement. Now, this doesn’t actually mean that they won out on click-through rates or any of the other measures we’d normally look at for online engagement, because Facebook doesn’t share that data. All we could really measure was how many times people liked, shared, and commented on a specific piece of content.

The right-wing sphere did win out, but that doesn’t mean the left-wing sphere was very far behind. Depending on the news event, sometimes they were neck and neck. The important thing here is that both of them were gaining audiences, both were having huge hits on social media, and both were having measurable engagement. That sometimes gets lost in the narrative we have about disinformation: the fact that engagement on the left is also significant.

Dave Bittner:

What were you able to gather in terms of how folks were using the information they were getting: the misinformation and disinformation versus what they would get from mainstream, traditional, reliable, professional news sources? Were you able to see that spectrum of how much they were dialing in one or the other?

Jane Lytvynenko:

No, and that’s a really tricky line to draw. We can’t necessarily say this person took this action because they were looking at this content. We didn’t really set out to look at that, because proving it, or even anecdotally showing it, gets very, very tricky. It gets very fuzzy. But what we were able to understand is that the business of partisan news, hyper-partisan news, and sometimes mis- and disinformation (all of which, of course, are different approaches to the information environment) was incredibly profitable. Whether on the left or on the right, the people running these websites were making a lot of money, and they were making it from the advertisements they put on their websites. They were raking in huge audiences.

It’s important to know that a lot of the stuff was not false. It’s not fake, not a completely made-up headline. There was, of course, some of that, but that’s not what we were looking at. What we were looking at were partisan headlines that very clearly leaned left or right, sometimes to the point of distortion. That’s a really important distinction, because for social media companies like Facebook, Twitter, and YouTube, the flat-out lies, the completely fake headlines, are fairly easy to take care of. As a matter of fact, over the next year or two, around 2018, we saw a significant decrease in flat-out fake headlines. But that more partisan spin is not something social media companies can very easily tackle.

Dave Bittner:

And isn’t it against their self-interest as well? Their bottom line is based on engagement.

Jane Lytvynenko:

It is. It’s important to note here that political content is by far not the most popular content on Facebook. If you opened up some of these monitoring tools when Donald Trump was president, he would be up there, but otherwise a lot of the engagement that social media companies get is based on pop culture, on things that don’t have much to do with politics. That doesn’t mean, though, that they don’t make money off of the political content that’s on their websites.

Before Facebook, Twitter, and the rest of the social media companies cracked down in earnest on purely fake news, or purely made-up headlines, we saw the people who created those websites use Facebook’s own promotional tools. I don’t know if your listeners will remember Facebook Instant Articles, which was touted as a way to help newspapers and media outlets make money, but we did see some fake news outlets use that feature as well. So there’s a huge financial component to this, both for the people creating the content and for the people hosting it.

Dave Bittner:

Help me understand just some of the real basics here. How do you define misinformation versus disinformation?

Jane Lytvynenko:

Right. So originally, the term fake news specifically was the most popularized term for talking about this information environment. It was popularized in part by my colleague, Craig Silverman. When he used the term fake news, what he meant was websites that are pretending to be news organizations but instead are writing totally made-up headlines: not headlines that are spin, not headlines that are partially correct, but real “Dinosaurs Came Back to Life” fakes.

Dave Bittner:

Right, The Weekly World News in the supermarket checkout aisle kind of thing.

Jane Lytvynenko:

Precisely.

Dave Bittner:

Elvis and an alien have visited my church.

Jane Lytvynenko:

That’s right. That’s right. But once the term fake news got picked up by politicians, most notably Donald Trump but not exclusively him, there needed to be better, more precise language to describe what we were talking about. That’s really when disinformation and misinformation started becoming more popular with reporters. They are academic terms that people who study this area have been using for a while, but it was only then that reporters started using the terminology. Even though there is a difference between the two words, they’re generally used interchangeably. The best way to think about it is that disinformation is the deliberate spread of false information, and misinformation is the mistaken spread of false information. For the deliberate spread, think Stop the Steal, for example. For misinformation, think your grandma forwarding you a WhatsApp message with bad COVID advice.

Dave Bittner:

I see. No, that’s a great way to frame it. What have you witnessed in terms of the growth of sophistication of these actors over the past few years, their ability to distill their messaging and really target the folks they’re after successfully?

Jane Lytvynenko:

There’s a lot that has changed over the last four or five years. It’s gone in a direction where it’s much more difficult to keep track of and report on. Once the pure fakes got out of the way, that’s where partisanship and hyper-partisanship really flourished. That meant misleading news headlines from places like Breitbart, for example. But over the last year, we saw the popularization of a different tactic.

That tactic is not necessarily written stories; it’s visual. It’s videos and photos that are taken out of context, where the caption on those videos or photos misrepresents what actually happened in real life. To me as a reporter, that is a particularly scary development, because for almost all of us, I think, seeing is believing. When you see a 10-second clip of something happening and somebody says, “Here’s what you’re seeing in this video,” you’re going to believe it.

It’s going to take a lot of effort to explain to the person who believed that video that, actually, you should watch this video for a minute and a half, because the full minute and a half shows something completely different, or that you should look at this video from a different angle. That kind of added context doesn’t play well with the way social media works, and it’s much more difficult to tackle.

Dave Bittner:

Yeah. I remember seeing things like that in this election cycle when people would hold rallies. If someone held a rally and was disappointed in the attendance, they would post a photograph from a completely different event that was packed full of people and say, “Look how many people came to our rally!”

Jane Lytvynenko:

There are a lot of very famous examples. This type of tactic was used very, very widely to demonize Black Lives Matter protesters over the summer. In terms of photos being taken out of context, I don’t know if you remember, but while the protests were happening, there was an entire narrative about piles of bricks being left out on the street, purportedly so that the protesters would pick them up and throw them. Of course, that wasn’t true. When you live in a city, there’s just construction happening here and there, and sometimes construction materials are lying around. But that was a real case of somebody taking a photo, saying, “This is actually what this is,” and it being very difficult to provide the additional context in that moment.

Dave Bittner:

What advice do you have for those of us who are out there consuming this media to best protect ourselves against this? Is there a way to inoculate ourselves?

Jane Lytvynenko:

I think there are three things I’d like to say on this. The first is: don’t be afraid of being wrong. We all fall for false information. It happens; it’s human nature. It happens to reporters, it happens to academics, it happens to politicians, it happens to our grandparents, and it happens to us. Falling for false information in itself is not a problem.

What is a problem is acting on that false information, whether that’s in real life or even just passing it on to your followers. Here’s the most consistent piece of advice that I give: be responsible to the community that you’ve built online. Whether you have 100 Instagram followers or 100,000, you’re responsible for the content those people are exposed to. So you can turn yourself into a speed bump and try not to pass false information on to your own online ecosystem. If you do accidentally, remove it and correct it loudly: admit to it, say that it’s false.

The final thing I’ll say is: build your own online ecosystem. Be very mindful of the accounts that you follow. Be very mindful of where you get your information. Really make sure that the information environment that lives on your phone, the one you doomscroll through every night instead of sleeping, is the one most likely to get you accurate, up-to-date news.

Dave Bittner:

Our thanks to Jane Lytvynenko from Buzzfeed for joining us.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast production team includes Coordinating Producer Caitlin Mattingly. The show is produced by the CyberWire, with Executive Editor Peter Kilpe, and I’m Dave Bittner.

Thanks for listening.
