March 25, 2019 • Zane Pokorny
To celebrate one hundred episodes of our show, we’ve got a special guest this week. The grugq is well-known in hacker and information security circles around the world, and a respected voice at conferences and on social media. He’s a bit mysterious, preferring to keep his real name under wraps.
The grugq joins us this week to discuss influence operations — their history, why they work, and how recent examples like the Russian meddling in the 2016 U.S. elections might be a sign of things to come.
This podcast was produced in partnership with the CyberWire.
For those of you who’d prefer to read, here’s the transcript:
This is Recorded Future, inside threat intelligence for cybersecurity.
Hello everyone, and welcome to episode 100 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.
To celebrate 100 episodes of our show, we’ve got a special guest this week. The grugq is well-known in hacker and information security circles around the world, and a respected voice at conferences and on social media. He’s a bit mysterious, preferring to keep his real name under wraps.
The grugq joins us this week to discuss influence operations — their history, why they work, and how recent examples like the Russian meddling in the 2016 U.S. elections might be a sign of things to come. Stay with us.
I’ve basically just been a hacker for the last 20 years or so. There’s not really, from my point of view, much else to say, but that encompasses quite a lot actually. I’ve been involved in reverse engineering, pen testing, red teaming, developing products, offensive, defensive, worked at startups, worked at enterprises. I’ve studied threat actors. I’ve been developing denial and deception systems. Basically, I’ve done everything in cyber that a civilian can do.
For folks who are curious, why the pseudonym? Why do you prefer to stay anonymous?
I’ve just been using it since I started and gradually everyone else has switched to using their real names, and I just never got around to it. Also, in about 2003 or 2004, a friend of mine, a reverse engineer, was giving a talk, and this guy is called Fravia, he was pretty famous back in the day. So he was giving a talk at a college. I went to go and see it, and when he came out he said, “The only people left using handles are you and me.” And a couple of years later he died of throat cancer. So, I’m the only person left. So, I kind of feel this dumb obligation to keep it.
Yeah, well it does, I think, also add probably a certain amount of mystique or swagger to your street cred.
Yeah, there’s nothing as good for your street cred as using the exact same nickname when you’re 40 as when you were 14.
Well, the focus of our conversation today is going to be influence operations. I want to start at the beginning there, looking back at some of the history of that. Can you give us a little bit of a history lesson? What is the history of information warfare?
What’s great about information warfare is, in many ways, it’s as old as language. Information warfare has been part of the repertoire of spying and espionage. As we joke, spying is the second-oldest profession. So, info war has been with us since forever.
What’s really fascinating is, if you dig into the history before it was known as info war, just regular history about military conflicts, you’ll find that what is now considered best practice and documented in the manuals was already being applied two or three thousand years ago. So Herodotus has a story about the Athenians and how they left a message for the Greek allies of the Persians. They wrote it on a rock, saying, “We’re all Greeks, we’re friends. You don’t need to fight us so hard. Run away if you can, or act sick, or hold back in battle.”
When you develop a message for info war, you have to see it from the point of view of the adversary, the target, the person who is going to be reading it and consuming it. And you need to develop a message that resonates with them. That is from their point of view. It has their best interests at heart. It’s about them, not about you. All of that has stayed the same since humans have been humans. There’s nothing that’s changed fundamentally about how to manipulate a human being.
What’s interesting, of course, is that the internet has given us the opportunity to go from writing on a rock and hoping someone sees it, to internet-scale, micro-targeted info war. You can literally target your message and tailor it to every single person, and that is very, very fascinating. That’s really changed the game. One of the important ways that the internet has changed info war is feedback: the ability of the propagandist, the person creating the message, to tell whether their message is actually resonating with the person consuming it.
Historically, the way that you had to do it was you had to get someone who was, effectively, a one-to-one copy of your adversary, except he was on your side. Someone who was a German but was loyal to Britain, for example. And you’d have to get them to be creative and imagine and come up with these ideas, and then just hope that it worked.
These days you could just do A/B testing, because everything has instantaneous feedback, which allows you to actually custom tailor your message in real time. You can show up, you don’t need to know anything about your target, and you could just throw things out there, and allow the way that people engage with your message to craft your info war for you.
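The A/B-testing feedback loop described above can be sketched in a few lines of Python. This is purely an illustrative toy, not anything from an actual campaign: the variant names, engagement numbers, and the epsilon-greedy strategy are all my own assumptions about how such a loop might look.

```python
import random

def pick_variant(stats, epsilon=0.1):
    """Epsilon-greedy selection: mostly show the best-performing message
    variant, but occasionally explore a random one to keep learning."""
    if random.random() < epsilon:
        return random.choice(list(stats))
    # Engagement rate = clicks / views (guard against zero views).
    return max(stats, key=lambda v: stats[v]["clicks"] / max(stats[v]["views"], 1))

# Hypothetical running tallies for two competing messages.
stats = {
    "A": {"views": 1000, "clicks": 40},  # 4% engagement
    "B": {"views": 1000, "clicks": 90},  # 9% engagement
}

print(pick_variant(stats, epsilon=0.0))  # pure exploitation picks "B"
```

The point of the sketch is the shape of the loop: engagement comes back instantly, so the next round of messaging can be steered by it without knowing anything about the audience up front.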
One of the sayings of Sefton Delmer, the genius of black propaganda, is, “You can’t bomb a currency, but you can destroy it with a whisper.” And that has carried over. I mean, there’s hard power and soft power, and soft power has stayed the same. The whispers they did, there’s actually a guide for how to craft whispers and rumors, and that guide is completely current. That’s how to make conspiracy theories; it doesn’t need to change a word and it’s still accurate.
One of the things that the British were doing was, they were making up fake ration books. They were duplicating the ration books for Germany, and they were dropping those. And people were actually using them, and they were incredibly good. I mean there was no way to tell them apart from the legitimate thing. But what Goebbels did, because he had to counter this, was he had very, very bad fake ration books made up. And he displayed these and said, “These dumb British think that they can make this sort of junk and people would fall for it. And clearly, no German would ever do something like that. And, by the way, there’s also the death penalty, if you try.” So they countered in the same way. It was these two states talking to each other through the medium of false information.
Well, it’s interesting. Let’s pivot and talk about the interference from Russia in the 2016 Presidential campaign here in the U.S., certainly one of the major stories from the past few years. Depending on who you ask, some people say it had a very low impact, others argue that it could have compromised the entire election. What is your take on that information campaign?
It absolutely has been the coolest thing to happen to cyber in a long time. What’s really interesting about the Russian meddling, as it’s now called, the Russian influence campaign, was that while it was going on there were a number of people in the information security community, and I assume in other communities as well, who knew what was happening and were calling it out. But you can only really do so much against a state. If a state is doing an influence campaign, and you’re a few people who know the truth, you don’t really have a platform to counteract that.
So, right after the election the Russians were like, “All right, we’re done. We don’t need to do this anymore.” And everyone started to go, “Wait a minute. Something happened.” And Facebook came out and said, “The idea that our targeted ads that are our lifeblood can in any way influence people’s opinions is ludicrous.”
Now, in some circles, it’s gone to the other extreme, where there’s a Red under every bed. The Russians are all-seeing, omniscient, omnipotent. They crafted and directed everything. Whereas a lot of the campaign was actually kind of a shambles. It was very much a day-to-day, skin-of-the-teeth thing. A lot of their stuff actually failed.
People have been talking about how there was this meme war, that these memes were these great terrible things that we don’t know what to do with. But if you actually look at the memes the Russians were producing, they’re not memes as we know them from the internet. They’re not jokes or reused templates or things like that. They’re basically just straight-up World War II propaganda posters. “Hillary is bad and with Satan, and Trump is good. Choose good.” That’s not a meme. “Together we can defeat the devil Hillary.” These are slogans you would see pasted up on a poster from World War II. This is not novel, exciting, futuristic stuff. It’s 70-year-old garbage.
But how much were they taking advantage of that ability to iterate?
This is the other thing. Even though they had the capability, they did not do it electronically. They didn’t do it digitally. They did iterate, on 24-hour iterations, and they did amplification. They’re Johnny-come-latelies: they wait until something is starting to crest in the target audience, and then they throw their weight behind it. People have said they throw spaghetti against the wall and see what sticks. That’s not true. They look at the wall, see what spaghetti has stuck, and then they go for that.
So, they’re actually slow movers, but despite being slow movers and coming after everything else, they still operate far faster than any government could respond. I mean, try to name one agency that can operate on a 24-hour loop and iterate and respond at that speed.
The way that they work technically is, there’s a group of managers who set the direction. There’s a data analytics group who would look at what was going on, look at which stories were doing well, what was coming up, and present this. The managers would read it in the morning. Then they’d come up and say, “These are the themes that we’re going to go with and these are the stories we’re going to amplify.” They’d distribute that, and their team of 80 people would go ahead and do that for the next 24 hours, until the next meeting the next day.
So it wasn’t very fast, it wasn’t very automated, it wasn’t very impressive. It’s very, very analog, but it still took advantage of that instantaneous feedback to have … The data analysts gave them that very, very fast response capability. The ability to see what was working, what wasn’t. So they took advantage of that, but they weren’t doing A/B testing, which I find very, very disappointing.
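That daily briefing loop, tally what resonated over the last 24 hours, then amplify the winners, is simple enough to sketch. Again a toy under my own assumptions: the field names, the engagement formula (shares plus likes), and the sample data are hypothetical, not drawn from the actual operation.

```python
from collections import Counter

def morning_briefing(posts, top_n=3):
    """Tally engagement per theme over the last 24 hours and return the
    themes already cresting, i.e., the ones to throw weight behind."""
    engagement = Counter()
    for post in posts:
        engagement[post["theme"]] += post["shares"] + post["likes"]
    return [theme for theme, _ in engagement.most_common(top_n)]

# A hypothetical day of observed posts in the target audience.
posts = [
    {"theme": "story-a", "shares": 120, "likes": 300},
    {"theme": "story-b", "shares": 900, "likes": 2100},
    {"theme": "story-a", "shares": 80,  "likes": 150},
    {"theme": "story-c", "shares": 40,  "likes": 60},
]

print(morning_briefing(posts, top_n=2))  # ['story-b', 'story-a']
```

Note this is pure exploitation of what has already stuck, which matches the description above: no A/B testing, no automation, just a ranked list handed to the team each morning.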
How do you then determine how successful they were? I guess part of me wonders with as tight an election as this was, as divided as Americans were, was it a situation where the Russians didn’t have to be all that good and still have a meaningful effect on things?
Right, right. I mean, unfortunately, there were so many variables involved, it’s not possible to tell whether they were the deciding factor or not. I tend to think that given how close it was, and the fact that they had an impact of some sort, while they might not have been the deciding factor, they were a deciding factor. And yeah, they did not need to be particularly good, because Americans have been polarizing themselves so well for so long that the Russians didn’t really need to come in and say, “Hey, you guys should hate each other now.” They just needed to come in and pick a side.
That worked incredibly well for them, and that’s partly a holdover, a consequence of the political environment that’s been developing for years. I think one of the problems the Russians actually have is that they overextended. They played their hand in 2016. The Russian playbook is now known, and they were too successful. So they’ve had this Pyrrhic victory. People don’t trust anything that comes from Sputnik or RT anymore, because it’s now known that they are propaganda outlets. And everyone else is adopting the playbook they developed. It’s escaped into the wild and people are going to use it.
So, I think they’ve actually scored an own goal, in a way. They didn’t really benefit from having Trump in charge. They would have been better off with a weak Democrat than someone who throws temper tantrums. So the Russian playbook has escaped into the wild. Other people can copy it. And the Russians’ ability to reuse it has been reduced, because now everyone is alert and watching out for it. So they’ve increased the difficulty of doing it again, and they’ve gotten very little for it. I think they’re probably not thrilled with the results.
Has there been any effective countering of this type of stuff? I mean in the modern age, in the post-internet age, what are we seeing in terms of nation states effectively countering this?
There are two answers to that. One of them is, we haven’t seen a face-off, a mano-a-mano stare down contest between two nation states in info war. So we don’t know that yet. But there have been cases where Russia has failed fairly catastrophically. France is a good example of that. There are different theories as to why. Macron likes to say that it’s because they had this clever IT defense stuff. I say that it’s because there was basically nothing to find on the guy that no one expected to get anywhere.
Information warfare attacks take a lot of preparation time. You need to do a lot of research, you need to have a lot of material. You need to prepare all the stuff. Get your narratives ready and then build up your channels. Get your credibility and so on. And if this dark horse comes out of nowhere and you have no background material on him, you don’t have the time to develop any of those things. So they didn’t have a chance. They just didn’t have the time for it. So they did the only thing that they could do, which is, they waited until the absolute, very last minute, threw out everything that they did have, and just hoped for the best. And obviously that wasn’t going to have any impact.
What I see going forward is, several things have happened that mean information war fights are going to keep happening. Right now there’s actually a huge cyber diplomatic information war being fought in the Middle East. And it’s really interesting, for two reasons. One is that, despite being really heated in terms of how much data they’re throwing around and the amount of hacking going on, it’s basically not having any impact. It has no effect. So the volume of activity doesn’t necessarily translate into visibility for the war, and from that point of view at least, the fight is a complete failure for everyone involved.
Why do you think it’s not having an effect?
I think Trump is sucking the air out of the room, to be honest. But the other thing is, there are things that are fascinating to people who live in the Gulf, like a sheikh’s son being put under surveillance by an Israeli firm. That’s a big thing there, but try getting anyone internationally to care. The problem is that these are all regional topics that they’re trying to blow up, that they’re trying to use, and it’s not working, because within the region itself the lines are clear. You’re either a Saudi, or an Emirati, or a Qatari. You are where you are. You’re not going to change your mind or be polarized or whatever.
So regionally the attacks have no impact, and internationally they are not interesting enough to have an impact. The only one that did, and this one I think needs to be studied more, because it’s absolutely fascinating, and in a way it suggests what the future of information operations could be, was the hack of the Qatar News Agency (QNA). What happened was, Saudi Arabia and the UAE and a bunch of other countries needed a pretext for a diplomatic spat to isolate Qatar. They cut off the airspace, they cut off the ports. They wanted to lay siege, basically. And they needed a pretext for that, because you can’t just randomly declare all-out diplomatic war on a nation for no reason.
So what they did was, they hacked the Qatar News Agency and they inserted a fake interview with a leader of Qatar, basically saying horrible things. You know, “We support terrorists,” which would cause a diplomatic incident were it to be true. And what’s really cool is the way that they went around doing it was, they took over the TV broadcasting. So QNA has … They’ve got a Twitter account, they’ve got a website, and they have a TV station. And the TV station was taken over and the chyron, that scrolling text along the bottom, was replaced to have … Basically they were inserting excerpts from this interview, saying like, “So and so says, we support terrorism.” To give the impression that this was a real authentic thing.
They also posted news articles on the website with the interview. It looked authentic, it was legitimately part of the website. It looked like this was a real story that had come out, and they sent it out on Twitter, and then they locked the Twitter so that the real QNA could not go and delete it.
So, this was a full spectrum info war. They got TV, they got Twitter, they got the website, and they took over a news station to do this. They took over an authentic credible channel.
I think in terms of the future of info war, it doesn’t need to just be a Facebook ad. You’ve got that A/B testing, micro-targeting capability with immediate feedback with an ad. But if you take over a legitimate news station, and you create a story, and you put it up in a way that it should be done, you know, there’s tweets that come out …
Yeah, give people the ability to fact check it via other channels.
Yeah, right. So you make it look like any big breaking story you’d have. The chyron saying, “Coming up next is exclusive footage of this amazing thing that will make you have a particular emotion about a particular event, or a particular person or whatever.” You put up your exclusive news article and you would put out links to it and it would look completely authentic. And what’s great these days is, people are so amazingly suspicious, so when it gets taken down, people won’t go, “Oh that was a fake thing and it’s now been removed.” They’ll go, “What are they trying to cover up?”
Right, right, it amplifies the conspiracy.
There is not going to be an easy way of getting rid of the stuff because once you’ve pushed it out via that thing, you can immediately … Anything that they do to get rid of it, they’ll say, “This was fake, this isn’t happening.” You can amplify on top of that every time it comes up. It’s an opportunity to remind people of this thing. And I think that’s going to be absolutely devastating and that’s next.
Do you suppose that any of these attempts … The organizations like Facebook or Twitter, they’re making a lot of noise about efforts to reduce fake news and filter these sorts of things out, do you think that’s possible?
One of the things that’s been interesting is, fake news turns out not to be a particularly big deal. In research, it was found that the people reading and sharing fake news were over 65, they were less than … They were a very, very tiny percentage of the population. They were hard core right-wing, and they shared the stuff back and forth between each other, and no one else really interacted with it. So the fake news thing is a bit of a red herring.
What’s more insidious is that proper information warfare doesn’t lie. You never lie if you can tell the truth. The trick is framing: you frame the truth, or you give a half truth, or you don’t give the full context. The classic example would be, in the U.K. there’s a right-wing newspaper and a left-wing newspaper. If the air force released that 80% of smart bombs hit their targets, the right-wing headline would be, “Unbelievable accuracy of smart bombs,” whereas the left-wing headline would be, “Almost 25% of smart bombs fail to hit targets.” That’s just framing, and both of them are factually correct.
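The framing trick above is just arithmetic on the same statistic, which a toy sketch makes concrete. (Strictly, an 80% hit rate implies a 20% miss rate; the quoted “almost 25%” headline rounds that up, which is itself part of the framing.) The function names and wording here are my own illustration, not anyone’s actual headlines.

```python
def frame_positive(hit_rate):
    """Lead with the success rate."""
    return f"Unbelievable accuracy: {hit_rate:.0%} of smart bombs hit their targets"

def frame_negative(hit_rate):
    """Lead with the complementary failure rate: same fact, opposite mood."""
    return f"{1 - hit_rate:.0%} of smart bombs fail to hit their targets"

rate = 0.80
print(frame_positive(rate))  # ... 80% of smart bombs hit their targets
print(frame_negative(rate))  # 20% of smart bombs fail to hit their targets
```

Both strings are derived from the identical input; neither contains a falsehood, which is exactly why this kind of messaging is so hard to moderate.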
That is going to be the problem that Facebook and Twitter have, is that it’s not going to be people putting false information out, it’s going to be true information, but couched in a way that causes an emotive reaction from the target audience that is the one that the propagandist wants. I don’t see how they’re ever going to be able to deal with that, just because you need to have editors, you need to have human editors in the way. And that’s one of the things that we’ve lost.
For the last 100 years we’ve had … Mass broadcasting has been … This mass media has been the way that populations stay informed, how they get their news. That used to be divided up into, there’d be the newspaper and there’d be a morning edition or an evening edition. And that kept everyone roughly with the same level of shared knowledge. It gave people a basic set of information.
Radio would have their news hour or their news programs and this kept people synchronized with information. And then there was TV, like the news at 7:00, the news at 11:00, and so on. And again, people were synchronized in when they were receiving information and when they were learning about what was going on.
So your entire audience, which ended up being nationwide, your entire audience has the same basic set of facts from two or three possible sources. And that’s why you get like a Walter Cronkite, this person gives you the truth. That’s the way the world is and he does it at a specific time and everyone knows that what he says, that’s how it is. And these days you don’t have that. It’s been desynchronized because there’s the 24 hour news cycle of cable news where they’re just … They have to have stuff on all the time. So people are already fractured, depending on if they watch cable news, which channels they watch, what time they watch it.
Newspapers are shrinking. There are fewer of them, and you’re not getting three editions every day from the same paper. Newspapers are no longer a great way of establishing a foundational truth for a population.

And to make it worse, the internet has now completely desynchronized people and created these social tribal groups, where you associate with people not because you’re geographically co-located. Historically, a tribe or a village would be you and everyone born in that area, and your authentication for information from those people was that you grew up with them. You knew everyone, and whether you could trust this guy or not. These days we still have the sense of being part of a village or a tribal group; however, we do not have that authenticity of knowing who we’re actually talking to. That’s it: we’re a global virtual village, with a large number of tribes and an infinite number of villages, made up of people who don’t actually know each other, who basically have to take on faith what’s being told to them.
I’m curious, what is your message to folks out there? Is there a way to inoculate themselves? Is there a way to do a better job of knowing when they might be a victim of this sort of thing? What’s the appropriate level of, for lack of a better word, anxiety to dial in about this?
That’s actually quite a hard one. So one of the easy ones is, if there’s a piece of substantive news it should occur in at least two separate newspapers. If you’re only single source, then you’re at risk. But the difficult thing is not necessarily being cautious and doubting information that is given to you by people you disagree with, but it’s the people who you agree with that are going to be the ones that push you further away. That’s how good info war works, because you’re going as part of the in group and you put your own messaging in there. So I guess the take-away message is just to be a lot more skeptical of what people say, and slow down. It’s not that important.
You don’t need to rush into things. Wait until it’s been verified and fact checked before you commit and, yeah, watch out for the people that you agree with.
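The “at least two separate newspapers” rule above can be expressed as a trivial corroboration check. A hypothetical sketch, with made-up outlet names and data structures, just to show the shape of the habit:

```python
def corroborated(claim, reports, min_outlets=2):
    """Treat a claim as verified only when at least min_outlets independent
    outlets carry it: the 'two separate newspapers' rule."""
    outlets = {r["outlet"] for r in reports if claim in r["stories"]}
    return len(outlets) >= min_outlets

# Hypothetical coverage across three outlets.
reports = [
    {"outlet": "paper-a", "stories": {"dam breach", "election recount"}},
    {"outlet": "paper-b", "stories": {"election recount"}},
    {"outlet": "paper-c", "stories": {"celebrity rumor"}},
]

print(corroborated("election recount", reports))  # True: two outlets carry it
print(corroborated("celebrity rumor", reports))   # False: single-sourced
```

The check is deliberately crude; as the discussion notes, the harder problem is that the sources you already trust are the ones best positioned to move you.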
Our thanks to the grugq for joining us for this special 100th episode of our Recorded Future podcast.
Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.
We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Zane Pokorny, Executive Producer Greg Barrette. The show is produced by the CyberWire, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner.
Thanks for listening.