Unwrapping Fishwrap, a New Social Media Misinformation Methodology

July 1, 2019 • Zane Pokorny

Researchers at Recorded Future have recently detected and described a new kind of influence operation that they’ve named “Fishwrap.” The technique involves recycling previously published news accounts of terrorist activities and amplifying their exposure through social media, with the apparent intent of sowing the seeds of distrust and unease.

Our guest today is Staffan Truvé, CTO and co-founder of Recorded Future. He’ll describe the tools they used to uncover the Fishwrap campaign, the conclusions they’ve reached from the information they’ve gathered, and the ways we can all prepare ourselves to spot them.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 114 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.

Researchers at Recorded Future have recently detected and described a new kind of influence operation that they’ve named “Fishwrap.” The technique involves recycling previously published news accounts of terrorist activities and amplifying their exposure through social media, with the apparent intent of sowing the seeds of distrust and unease.

Our guest today is Staffan Truvé, CTO and co-founder of Recorded Future. He'll describe the tools they used to uncover the Fishwrap campaign, the conclusions they've reached from the information they've gathered, and the ways we can all prepare ourselves to spot them. Stay with us.

Staffan Truvé:

So it's actually a bit of an interesting story. The way it came on the radar was that we've been developing some new machine learning-based methods for detecting events, for example, terror events. During quality control in our testing, one of us completely accidentally found that there were some terror events which were only reported on social media and not in any mainstream media. And as we started digging in, we saw that this was not a single thing. There was actually a long-running campaign which had essentially been posting old terror news as new, and only doing it on social media.

Dave Bittner:

Well, let’s walk through this from the beginning. First of all, can you describe for us … What are we talking about when we say influence operations?

Staffan Truvé:

Right. So influence operations, some people think of this as something very new, but of course it's been going on for hundreds of years. Essentially, as part of any war or political campaign, people try to influence the other side to make them think differently about things. But in recent times, the Internet has provided a much more powerful platform for doing this at a large scale. This got a lot of attention in the 2016 U.S. elections, and we saw it at a smaller scale in the 2018 elections. But in general, for us, an influence campaign is when there is an organized attempt to change people's minds, opinions, or behavior.

Dave Bittner:

And that is what it seems like you’re tracking here.

Staffan Truvé:

Right. And I think it's worthwhile saying also that it does not necessarily have to be a political campaign. There have been examples historically of people trying to manipulate stock prices of companies by spreading misinformation. That's another kind of campaign. But in this case, what we've seen is a campaign which seems to be focused on spreading fear, uncertainty, and doubt, at least on the surface. We can come back later to alternative interpretations. And what they're doing here is trying to make it appear as if there is an essentially constant flow of terror events in Europe.

Dave Bittner:

Well, let’s walk through this together. How does one of these get spun up?

Staffan Truvé:

Right. So the way this works … actually, let us go back to what we have now identified. As I mentioned, we started off by seeing a few random accounts which were spreading this kind of information. And when we went back and looked at it in a more organized way, using some of the new algorithms we've developed, we found that this was indeed a larger scale, coordinated campaign. It began around March or April of 2018. What we saw was maybe a few dozen accounts that started spreading this kind of news, and it then grew in volume.

Many of these accounts were then shut down in the September, October timeframe. But then we saw some new accounts popping up a couple of months later, and most of them were actually still active up until a couple of weeks ago, when we released this report. After that, essentially all of them seem to have been closed down.

Dave Bittner:

Now when you say, “closed down,” do you suspect that they closed them down themselves or did the social media provider close down the accounts?

Staffan Truvé:

It’s essentially impossible for us to see that. But I think the fact that this came right after our disclosure makes me believe that it was the platform which actually decided that, even though these accounts were not technically violating the terms of use, they were in effect spreading information in a misleading way. So I would suspect it’s the platform which actually decided to shut them down but we cannot really see that from the outside.

Dave Bittner:

So just to be really clear here, the type of information that they’re posting, these are real events but from the past?

Staffan Truvé:

Exactly. So a typical example would be a posting relating to, say, a terror event which takes place in Paris. This was posted in maybe April this year, and there is no time indication in the actual post, which of course makes it appear to the casual observer as if it happened right now. And then there's a link back to another news article. That article will talk about the same event, but if you look a bit more carefully, you'll see that it was actually published three or four years ago. So as we said, they were reusing old news to spread this, and that's why we dubbed the operation Fishwrap: they were essentially recycling old news in a clever way.

Dave Bittner:

The suspicion here is that this is really just to make people uncomfortable, to put them ill at ease.

Staffan Truvé:

Yes, that was at least our initial judgment. As we've looked more into it, the alternative interpretation is, of course, that they were grooming accounts: establishing these accounts, building a follower base, and then planning at some future time to actually use them to spread some specific message or actual fake news. Again, we have not seen that actually take place, and now that most of it seems to be shut down, we will probably never see it.

Dave Bittner:

All right, well let’s dig in here. Walk us through how you were able to connect the dots between the various accounts.

Staffan Truvé:

Yes, that's actually an interesting story. As I mentioned, we started out by seeing a number of posts from a number of accounts which were relating the same news. We saw that they were linking to the same URLs. This is part of what we call our snowball algorithm: you start with a seed, which could be a specific account, a specific post, or a specific URL, and then you essentially roll your snowball. If you have an account, you can find all the things it has posted about, and then you can find other accounts which have posted the same things, which will in turn lead you to new posts by those accounts. As you can see, you get more and more things. So that's the first part of the snowball algorithm: gathering a large number of accounts apparently posting about the same thing.
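
To make the expansion step concrete, here is a minimal sketch of how a snowball pass like the one Truvé describes could be implemented. Recorded Future has not published their implementation; the accessor functions (`get_posted_urls`, `get_accounts_posting`) are hypothetical stand-ins for whatever collection backend is in use.

```python
from collections import deque

def snowball(seed_account, get_posted_urls, get_accounts_posting, max_rounds=3):
    """Expand from a seed account to other accounts sharing the same URLs.

    get_posted_urls(account)  -> set of URLs the account has posted
    get_accounts_posting(url) -> set of accounts that have posted the URL
    Both are hypothetical accessors over a social media collection.
    """
    accounts = {seed_account}
    urls = set()
    frontier = deque([seed_account])
    for _ in range(max_rounds):
        next_frontier = deque()
        while frontier:
            account = frontier.popleft()
            for url in get_posted_urls(account):
                if url in urls:
                    continue
                urls.add(url)
                # Every account that shared this URL joins the snowball
                # and is expanded in the next round.
                for other in get_accounts_posting(url):
                    if other not in accounts:
                        accounts.add(other)
                        next_frontier.append(other)
        frontier = next_frontier
    return accounts, urls
```

Note that this deliberately over-collects; the filtering step Truvé describes next is what prunes the false positives.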

Of course, what you'll get then is quite a few false positives, because you will also catch all the innocent bystanders who have decided to repost this. So the next step is to try to identify the accounts which are actually core to the operation. The method we developed for doing that is to look at what we call behavioral similarity. We're looking at a number of different behavioral aspects of the accounts. One example could be when they were created or when they were shut down. Another can be whether they are using specific sets of hashtags. A third one, which was a bit special in this case, was that we noticed these accounts were using a family of special URL shorteners.
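
As a rough illustration, behavioral similarity can be framed as reducing each account to a feature vector and scoring pairs. The features below (creation and shutdown dates, hashtags, shortener domains) are the ones named in the interview, but the scoring itself is a hypothetical sketch, not Recorded Future's actual method.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AccountProfile:
    created_day: int            # days since a fixed epoch
    closed_day: Optional[int]   # None if the account is still active
    hashtags: set = field(default_factory=set)
    shortener_domains: set = field(default_factory=set)

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def behavioral_similarity(x: AccountProfile, y: AccountProfile) -> float:
    """Blend the behavioral aspects mentioned in the interview into one score."""
    # Accounts created (or shut down) within a week of each other look coordinated.
    created = 1.0 if abs(x.created_day - y.created_day) <= 7 else 0.0
    closed = 1.0 if (x.closed_day is not None and y.closed_day is not None
                     and abs(x.closed_day - y.closed_day) <= 7) else 0.0
    tags = jaccard(x.hashtags, y.hashtags)
    shorteners = jaccard(x.shortener_domains, y.shortener_domains)
    # Equal weights are an arbitrary choice for illustration.
    return (created + closed + tags + shorteners) / 4.0
```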

At first we saw that there were roughly a dozen different URL shorteners. But then, interestingly, when we dug deeper into this, we realized that even though these were different URL shortener services, they were clearly using the same code, which you could see if you went to their homepages. They looked exactly identical, and these services were all hosted on Azure, so the hosting platform was the same. They were all anonymously registered, so we have no way of actually finding out who is behind them, but there was no doubt that not only was there a family of URL shorteners, there was also a large number of accounts using this family.
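
One simple way to confirm that a dozen shortener services share the same code, as described above, is to fetch each homepage and compare a normalized fingerprint of the HTML. This is a hypothetical sketch using the requests library; the domain list would come from the snowballed posts.

```python
import hashlib
import requests

def homepage_fingerprint(domain: str) -> str:
    """Hash the homepage HTML, ignoring whitespace differences."""
    html = requests.get(f"http://{domain}", timeout=10).text
    # In practice, strip the site's own name/domain from the page first,
    # so identical code with a different title still matches.
    normalized = " ".join(html.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def group_by_fingerprint(domains):
    """Group shortener domains whose homepages match after normalization,
    i.e., domains likely running the same code."""
    families = {}
    for domain in domains:
        families.setdefault(homepage_fingerprint(domain), []).append(domain)
    return families
```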

So in that way, we were able to clearly identify that they were all related, and then we could see that they separated into essentially three clusters: one of early accounts operating from spring 2018 up to around fall, another from the fall until now, and a couple of accounts which had actually been active for the whole time period.

Dave Bittner:

Now, looking at those URL shorteners, are you trying to connect the dots that they may have been part of this, that they may have been spun up by the same folks who are spreading the stories, or is it a coincidence that they've just chosen to use these publicly available URL shorteners?

Staffan Truvé:

Well, when we look at these URL shorteners, I would say there is no doubt that there is a clear connection between them. We saw essentially no other posts, except for reposts, using these URL shorteners. And there's also the timeline: the domains of these URL shorteners were registered a week or so before the first batch of accounts became active, and the second batch of accounts was more closely tied to another set of URL shorteners activated right before that batch started. So in my mind, there is no doubt that these accounts and those URL shorteners are all part of the same operation.
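
That timeline argument can be checked mechanically: for each shortener domain, compare its registration date with the first activity of the account batch that used it. A minimal sketch, assuming registration dates have already been pulled from WHOIS records:

```python
from datetime import date, timedelta

def registered_just_before(domain_registered: date,
                           first_account_activity: date,
                           window_days: int = 14) -> bool:
    """True if the domain was registered shortly before the accounts
    using it became active, as described for both Fishwrap batches."""
    gap = first_account_activity - domain_registered
    return timedelta(0) <= gap <= timedelta(days=window_days)
```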

Dave Bittner:

I see. So let’s go through … You’re referring to this as Fishwrap. Take us through some examples of what you found.

Staffan Truvé:

Right. So as I said before, as we dove into these, what we saw was a couple of different clusters of accounts. First of all, they were all posting about terror events in Europe, in essentially three markets, if you like: one cluster related to U.K. terror events, one to France, and one to Germany. And interestingly enough, even though the accounts all appeared to be registered in the U.S., they all had names which would give you the idea that they were somewhat local. So the ones tweeting about German terror had names like The Father Land or something like that, which would give an association with Germany, and similarly for the U.K. ones, which were called London Lads and things like that. In that sense, they were clearly trying to make it seem as if it was someone local, who knew that these things were actually happening, and posting photos. It was, I would say, a well orchestrated campaign in that sense.

Dave Bittner:

And one of the things that you were tracking here was what you refer to as temporal behavior. What’s going on with that?

Staffan Truvé:

Right. So since we track things over time, we could actually see two kinds of temporal profiling here. One is when the accounts were created and when they were shut down, for whatever reason. But we also looked at the activity pattern during the period they were active, and we could actually see that several of them had holidays at the same time. The accounts which became active in October and November were all inactive from mid-December to January first, so you could see that they seemed to take a long Christmas break, if you like.
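
Shared gaps in activity, like the Christmas break described above, can be surfaced by intersecting each account's inactive days. A minimal sketch, assuming each account's activity has been reduced to the set of dates on which it posted:

```python
from datetime import date, timedelta

def shared_inactive_days(activity, start: date, end: date):
    """Return the days in [start, end] on which *none* of the accounts posted.

    activity maps account_id -> set of dates with at least one post.
    A long common gap (e.g., mid-December through January 1) suggests
    the accounts run on the same operator's schedule.
    """
    days = {start + timedelta(n) for n in range((end - start).days + 1)}
    for posted_days in activity.values():
        days -= posted_days
    return sorted(days)
```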

Dave Bittner:

And another thing you tracked was the URLs and the domains that they used?

Staffan Truvé:

Yes, exactly, linking back to the URL shorteners. And an interesting thing we really want to stress here is that we have not seen any truly malicious activity from these URL shorteners. We have not seen any traces of them spreading malware or anything like that. However, they all contain a fairly simple but very efficient tracking mechanism, so the people running the operation could definitely keep track of how many were reposting or clicking on the links in these posts.
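
The operators' actual tracking code is not public, but any URL shortener can double as a click tracker with almost no machinery: log the request, then redirect. A generic illustration using only the Python standard library:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

LINKS = {"/abc123": "https://example.com/old-news-article"}  # placeholder mapping
CLICKS = {}

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        target = LINKS.get(self.path)
        if target is None:
            self.send_error(404)
            return
        # Counting the request before redirecting is the entire
        # tracking mechanism: the operator sees every click.
        CLICKS[self.path] = CLICKS.get(self.path, 0) + 1
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```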

Dave Bittner:

And you were able to track, I guess the general success, how many times the posts were mentioned and the amplification factor?

Staffan Truvé:

We have partial information on that. Again, we don't collect everything, even if we try to be ambitious on that side. We could see that they had a fair number of followers, not magnificent numbers, maybe from a hundred to a few thousand per account, so nothing really huge in that sense. Depending on what their goals were, you can't really tell how successful they were. But if you have the hypothesis that they were nurturing these accounts for future use, maybe that would have been enough as a first step, because followers are really useful as an amplifier in the next stage. If you want to send out another message, you post it on one account, and if that account has, let's say, just a few hundred followers, who in turn have many followers of their own, you get a multiplicative effect in spreading your message.
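
The multiplicative effect is simple arithmetic. With illustrative numbers drawn from the ranges mentioned in the interview (200 to 300 accounts, a hundred to a few thousand followers each), the potential second-hop audience grows quickly; the result is a ceiling that assumes every follower reposts.

```python
# Illustrative numbers only, taken from the ranges cited in the interview;
# the average-audience figure is an assumption for the sake of the example.
operated_accounts = 250        # accounts attributed to the campaign
followers_each = 500           # "a hundred to a few thousand" per account
followers_of_followers = 200   # assumed average audience per follower

first_hop = operated_accounts * followers_each    # direct reach: 125,000
second_hop = first_hop * followers_of_followers   # ceiling if all repost: 25,000,000
print(f"first hop: {first_hop:,}  second-hop ceiling: {second_hop:,}")
```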

Dave Bittner:

Yeah, I wanted to ask you about that. How does a newly spun-up account like this on social media go about pulling in the folks to amplify their message who aren't bots, who aren't part of the program? What are the tactics there?

Staffan Truvé:

Well, I guess you have to figure out what will make people connect with these accounts. Mentioning names of places is one way: if you mention a city like Paris in a post, there are many people who have alerts set up so they notice any post related to where they live, or where they do business, and so on. And we also don't know, maybe they were even buying followers to amplify their network that way. Again, the focus for us in this case has really been to understand the mechanisms, and to validate that this is actually part of one coordinated campaign.

Dave Bittner:

How many accounts have you tracked here? What’s the general sense of the scale of this?

Staffan Truvé:

So overall, if you look at the accounts using these services, there are thousands of them. What we've seen is somewhere between 200 and 300 accounts which have been clearly identified as belonging to the campaign, in terms of having been active over a long period of time, having been linked to each other by using the same services and posting the same links, and, as I mentioned, the temporal behavior as well. So it's not a magnificently large number, but again, depending on the purpose, maybe it was sufficient. I should also say that we have not strived to get as complete coverage as possible. I talked to another journalist about this a couple of weeks ago, and when he went back and started looking himself, he was able, in a couple of hours, to identify a bunch more accounts which were clearly part of the same campaign.

Dave Bittner:

And you mentioned earlier that when you had published your research many of these accounts had been shut down. Is there any sense that these sorts of operations are growing in scope, or what’s your sense there?

Staffan Truvé:

I don't have a clear answer to that. I would say the more interesting thing is that in this case, we weren't even looking for this. We really stumbled on it, and as we started unwinding it, we saw that there were so many accounts involved. So it makes you wonder how many of these kinds of operations are actually out there. We were anticipating a bit that this campaign could be related to the European Parliament elections, which we had here a month or so ago, but the accounts were active after that as well, so there's no clear correlation there.

I think what we're going to do going forward is to continue developing these algorithms, and especially to not have to rely on manual identification of the seeds, but instead do larger scale anomaly detection. So going forward, if we see a significant event being reported only on social media and not in mainstream media, or only in certain languages and not others, even if it's a big event, we will automatically be able to detect it as a starting point for the snowball algorithm. And of course, one of the plans we have here is to keep a close eye on this for the 2020 U.S. elections; there is bound to be significant activity of this kind in those elections.
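
That anomaly, an event widely reported on social media but absent from mainstream media, reduces to comparing mention counts across source types. A minimal sketch, assuming events have already been extracted and each mention tagged with its source type:

```python
def discrepancy_seeds(mentions, min_social: int = 10):
    """Flag events widely reported on social media but absent from
    mainstream media, as candidate seeds for the snowball algorithm.

    mentions maps event_id -> {"social": count, "mainstream": count}.
    """
    seeds = []
    for event_id, counts in mentions.items():
        if counts.get("social", 0) >= min_social and counts.get("mainstream", 0) == 0:
            seeds.append(event_id)
    return seeds
```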

Dave Bittner:

So what are your recommendations, and what are the take homes here? What are the things you’ve learned, and for folks who are out there keeping an eye on these sorts of things, what should they know?

Staffan Truvé:

Well, first of all, if you look at the platforms themselves, I think they need to become better at identifying these kinds of campaigns. I'm sure all the social media platforms are doing a lot of work right now, but I think what makes us able to do this kind of detection is that we're not only harvesting a specific platform. We're doing very large and broad collection, so we can detect this thanks to the fact that it's mentioned in one kind of media and not another. If you're only monitoring your own platform, you would never see that kind of discrepancy in reporting between different sources. So I think that's one thing to keep in mind: you need a broad picture if you want to detect these things.

The other takeaway, and this is a useful message for every one of us, is to be critical when we see news: to check links, to validate that what you're seeing is not actually some old thing being reposted. It's very easy, when you're quickly glancing at your phone, to believe things. I think that's very human of us. But being increasingly on our toes in terms of being critical of what we read is important.

Dave Bittner:

Yeah. It strikes me that even just that initial sense of unease, keeping people leaning back and off balance, means that even if they go in and check the date and realize, "Oh, this isn't something that's current," the mission has been accomplished. They've altered the mindset of that person, even if just for that little moment.

Staffan Truvé:

You're quite right. I think even if you doubt that specific news item, it still sits in your head somewhere. And the other thing, of course, is that you could become generally more skeptical of news overall, which is another effect you might achieve by doing something like this.

Dave Bittner:

Our thanks to Staffan Truvé for joining us.

The research is titled, “The Discovery of Fishwrap: A New Social Media Information Operation Methodology.” You can find it on the Recorded Future website.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Zane Pokorny, Executive Producer Greg Barrette. The show is produced by the CyberWire, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner.

Thanks for listening.