False Flags From Olympic Destroyer

March 5, 2018 • Amanda McKeon

The 2018 Olympic Games in PyeongChang recently concluded, but not without attempts at disruption from cyberattackers. A major telecom and IT provider was targeted with a multi-pronged campaign to gather credentials, move laterally within networks, and destroy data. It borrows bits of code from previously known campaigns, and was an aggressive effort to spread quickly and cause maximum damage to systems.

Greg Lesnewich is a threat intelligence analyst with Recorded Future’s Insikt Group, and he joins us to provide an overview of the malware campaign named Olympic Destroyer. We’ll get technical details, as well as a sense for why attribution is notoriously difficult in cases like this, and whether or not we’re seeing evidence of a false flag operation.

This podcast was produced in partnership with the CyberWire and Pratt Street Media, LLC.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello, everyone. I’m Dave Bittner from the CyberWire. Thanks for joining us for episode 46 of the Recorded Future podcast. The 2018 Olympic Games in Pyeongchang recently wrapped up, but not without attempts at disruption from cyberattackers. A major telecom and IT provider was targeted with a multi-pronged campaign to gather credentials, move laterally within networks, and destroy data.

Greg Lesnewich is a threat intelligence analyst with Recorded Future’s Insikt Group, and he joins us to provide an overview of the malware campaign named Olympic Destroyer. We’ll get technical details, as well as a sense for why attribution is notoriously difficult in cases like this, and whether or not we’re seeing evidence of a false flag. Stay with us.

Greg Lesnewich:

Olympic Destroyer is a piece of malware that was observed, initially, by our friends over at Talos. They did some analysis on it, and they got it onto our radar. It appeared to be destructive malware used to target the Olympic Games IT infrastructure, especially ticketing systems, we found, to prevent people from getting into the opening ceremony, things of that nature, to sort of ruffle feathers as much as they could at a physical event through cyber means. It spread laterally through networks via WMI and PsExec, so it was able to propagate rather quickly and destroy data on the computers it infected.

Dave Bittner:

Can you dig into some of the technical details here, of what was going on under the hood?

Greg Lesnewich:

When Talos initially reported on it, they found the destructive malware out in the wild. I believe it was uploaded to VirusTotal, and that is how they got tipped off to it; I remember that is how we got tipped off to it. It initially acts as a dropper. The files it drops include a browser credential stealer and a system credential stealer, so that it can propagate through the network. The system credential stealer operates very similarly to Mimikatz, which led to some code overlap ambiguity, because Mimikatz was used in the Petya and NotPetya ransomware attacks in 2017, and that created this environment of, “Hey, there’s code overlap, but it’s in a really common tool, so can we actually call this code overlap?” Additionally, there’s a destructive portion of the malware that writes a bunch of files to disk. It uses cmd.exe to delete all shadow copies with VSSAdmin, disables WBAdmin (a tool used to recover individual files and folders) so that Windows recovery can’t pick up the pieces, and then goes through and disables a ton of services on the system, using the ChangeServiceConfigW API to further disable things.
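For defenders, the recovery-inhibition behavior Greg describes is fairly detectable, because commands like deleting shadow copies with VSSAdmin or wiping the WBAdmin backup catalog are rarely legitimate. Here is a minimal, hypothetical Python sketch that flags process command lines matching those patterns; the function name and pattern list are illustrative, not drawn from any real detection product:

```python
import re

# Patterns mirroring the recovery-inhibition commands described above
# (shadow copy deletion via VSSAdmin, backup catalog deletion via WBAdmin).
# This list is illustrative, not an exhaustive production ruleset.
RECOVERY_TAMPER_PATTERNS = [
    re.compile(r"vssadmin(\.exe)?\s+delete\s+shadows", re.IGNORECASE),
    re.compile(r"wbadmin(\.exe)?\s+delete\s+catalog", re.IGNORECASE),
]

def flags_recovery_tampering(command_line: str) -> bool:
    """Return True if a process command line matches a known tampering pattern."""
    return any(p.search(command_line) for p in RECOVERY_TAMPER_PATTERNS)
```

In practice, a rule like this would run against process-creation telemetry (for example, Sysmon event ID 1 or EDR command-line logs) rather than against strings in isolation.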

Something that prompted our research into it … We had sort of observed from afar, and we have some customers that have interest, or participation, in the Olympic Games for a variety of reasons, whether nationalist and patriotic, or because they are providers in a small or large capacity. What we found was evidence of hard-coded credentials from a customer of ours in the malware, so we raised the alarm, investigated the malware further, and said, “Well, we need to write this up for our customer and get this out before it gets released.” There’s a ton of other groups looking at the malware, and we wanted to make sure that we let the customer know before we did anything. The thing that originally prompted our initial look at the malware was that Intezer tweeted something out noting that they had found code overlap with two different Chinese APT groups, APT3 and APT10.

APT3 is closely affiliated with the Chinese Ministry of State Security, and APT10 is known for cyber espionage activity attributed to Chinese actors. It was a super, super small amount of code overlap, and that was what prompted us to go and look at the malware initially. From there, we found the hard-coded credentials of a client of ours, and that turned into a written piece, which then turned into reporting on attribution practices. Looking through our repository of malware and other things that we keep tabs on, we also found code overlap with the Lazarus Group out of North Korea, which would make sense on the surface, given that the Olympics were held in Pyeongchang. It would make sense for the average person to think that North Korea was targeting the Olympic Games to disrupt them, even though their athletes were invited.

Additionally, I think that the finding of Chinese APT code inside the malware samples we had access to was evidence, pretty quickly, especially to our lead analyst over here, that it was a false flag operation. It was too much of a coincidence to have all of these different code overlaps with tools used by the Chinese and the North Koreans. The NotPetya code overlap, via the Mimikatz tool, was in the system credential stealing portion of the malware, and that raised concerns for us. If we can’t positively identify something, we shouldn’t put something out there saying, “We attribute it to this because of a small amount of code overlap.”

Dave Bittner:

Yet having all this different code, all these ducks in a row, if you will, seemed a little too neat for you to take the bait when it came to attribution.

Greg Lesnewich:

Precisely. If we made a call every time we saw a little bit of code overlap … Especially with all these tools that have now become public through malware repositories, code sharing, and source-code leaks, it’s now relatively easy to emulate an APT. We’ve seen, in the fallout from the Vault 7 and Shadow Brokers leaks, that traditional cybercriminals can now use more advanced exploits and malware, and so the availability of high-level custom code, especially malware-related, isn’t necessarily good enough evidence for attribution anymore.

Dave Bittner:

How often do you see this sort of thing? This code reuse, this false flag sort of thing — is this becoming a standard feature of these things that we suspect may come from state actors?

Greg Lesnewich:

It’s definitely ramped up. I would pontificate that previously, it was limited to state actors doing it in very, very rare circumstances. But it has now become more popular, especially in more publicized malware, and Olympic Destroyer is a good example of that. Fortunately for us at Recorded Future, we have JAGS, who is the false flag king, finding weird code overlap and determining whether it was stolen merely for its utility, or planted there to confuse investigators and forensics experts about who actually wrote it.

Dave Bittner:

Can you take us through this sort of two-pronged campaign, how there were two parts? I think there was a reconnaissance phase and then an actual destructive phase. Is that accurate?

Greg Lesnewich:

Yes. The way we broke it down is, we believe that the first phase targeted the IT infrastructure provider of the Olympic Games to get a foothold and guarantee access into the Pyeongchang Olympics network once it was set up and in use during the event. That foothold, which we believe was established around mid-December, was then used to grant access, act as an infection vector into the Olympics network itself, and unleash the destructive malware upon those systems. There’s no evidence to support that a phishing email, a drive-by download, or an exploit was used to gain access to the network, and finding the hard-coded credentials of the IT provider supports that. The provider was hit first, and then those credentials were used to open up the Olympics network to the threat actor.

Dave Bittner:

Based on what you saw, how successful was this campaign?

Greg Lesnewich:

I would say it was successful for two reasons. One, it greatly muddied the waters around the responsible party, and we’ve seen the fallout from that: it generated a bunch of concern and finger-pointing over who did it in the immediate aftermath. Additionally, from my understanding, it was pretty effective on the ground in keeping people from getting into some of the events. I don’t think this was a “Hey, let’s bring down the entire Olympic Games infrastructure” operation, and fortunately, the worst of it was mitigated by the teams over there. I’m not sure if that was inside the scope, but it did enough to stir the pot and get people blaming each other. Once that happened, the responsible party was insulated from real repercussions, because of the public doubt that now exists.

Dave Bittner:

Yeah. It’s an interesting dynamic, how multi-layered this is, where you have the gathering of data, you have the destruction, but then, kind of sprinkled all throughout that is the uncertainty that it causes. It makes you wonder, what was the primary goal here? Was it the destruction? Was it the gathering of data? Was it the uncertainty injected into the system and the community?

Greg Lesnewich:

Right. Agreed, and I think all of those things, the muddying of the waters, play into the hands of the threat actor, and the defenders are left wondering, “Did they accomplish their goals? Did they not?” That leaves ambiguity around their actions and objectives. Did they find what they were looking for? Did they disrupt things to a level they felt was sufficient? Or did they get fingers pointed at the parties they intentionally stole code from, to ruffle further feathers? Not knowing their goals in such a hectic and multi-pronged campaign makes it difficult for the defenders, but I think some of that, at least in the attribution and false flag area, may have been a collateral benefit to them.

Dave Bittner:

Now, obviously, this was targeted on the Olympic Games, which doesn’t happen every day, isn’t part of everyone’s day-to-day operations. How does an attack like this inform you and the work you do, and your advice to people who are dealing with day-to-day defensive networks?

Greg Lesnewich:

I think that as much as targeted attacks occur, the vectors are always still going to be the same, and they’re, ultimately, going to be targeting the people. I think brand awareness for any large organization … If they’re a sponsor or they’re affiliated with any political or social thing on either side of it, they can be affected by something like this. We see it in the U.S. all the time, with how our political system is taking on new shapes and forms, and I think that cybersecurity teams need to be aware of what their brand is involved in and where their infrastructure is sometimes temporarily deployed to have a better understanding of the threat landscape that they’re facing. But I think that teams that are typically tasked with defending their corporate network need to have an understanding of mobile infrastructure and where the brand is internationally, and understand that, for whatever reason, they may get targeted for destructive or disruptive purposes, especially if it’s something like a World Cup, an Olympic Games, supporting one political opponent over another, and that sort of thing.

I think understanding that, and understanding that the lures, the phishing, and the intrusion vectors will very rapidly mirror those events, is important, because it’s really easy to change the subject line and the text in a phishing email. It’s much more difficult to pivot a SOC or an IR team to be aware of all those changes and know to look for that stuff.

Dave Bittner:

I hear people say that attribution isn’t really important unless you are a nation state or law enforcement. While it’s natural to want to know who did this to you, when it comes to actually defending yourself, maybe it’s not a top priority, or shouldn’t be, and you should be wary of chasing after that information and allowing it to become a distraction.

Greg Lesnewich:

I agree. I think it’s very useful to track actors internally in terms of the tactics, techniques, and procedures they commonly use, so that you can follow them over time and understand their typical behaviors, both from a proactive monitoring standpoint and for clustering, to understand the different threats in your network. If you’re a small or medium-sized company with, say, 100 incidents over the course of a year, and all of a sudden you’re able to cluster them based on the use of phishing emails, different exploits, or tools that an actor commonly uses, then you can understand, “Okay, this sort of stuff has been most effective against us, and we need to step up and monitor X group, Y group, and Z group better. But A through W hasn’t really had a huge effect on us, and all of it has been stopped at the border router or in the email sandbox.”
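The clustering Greg describes can be sketched in a few lines. This is an illustrative example only; the incident records and field names are hypothetical, not a real schema, and in practice the data would come from a ticketing system or SIEM:

```python
from collections import defaultdict

# Hypothetical incident records tagged with the intrusion vector observed.
incidents = [
    {"id": 1, "vector": "phishing", "stopped_at": "email sandbox"},
    {"id": 2, "vector": "exploit", "stopped_at": "border router"},
    {"id": 3, "vector": "phishing", "stopped_at": "endpoint"},
]

def cluster_by_vector(incidents):
    """Group incident IDs by intrusion vector so the noisiest TTPs stand out."""
    clusters = defaultdict(list)
    for incident in incidents:
        clusters[incident["vector"]].append(incident["id"])
    return dict(clusters)
```

Grouping like this makes it easy to see which vectors recur against your organization, which is the starting point for deciding which groups deserve closer monitoring.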

I think it’s useful. I think once you get beyond how they interact with your network, I think it gets a little bit diluted, and there may be a gap there of understanding, “Hey, some people might want us for our financial records, our credit card information. Some people might want our intellectual property, and stuff that may be useful for them.” So, I think knowing motives is useful to inform where and how you defend certain things, but I don’t think that chasing down the exact guy on the other end of the computer — if you’re tasked with defending the network — is something that is a good use of resources or time. It’s fun for researchers to pontificate and chase down those things, but it’s a little bit of a different ball game when your task is, “Hey, defend this network.” You’re going to have all this other work pile up if you want to go down this rabbit hole for two days and really try and find who did it. But again, it’s time and resources that not every company has.

Cybersecurity isn’t a profitable line of business unless you’re a cybersecurity company, and so it’s all money that’s going to get spent and be seen as a black box that isn’t really producing anything. I think you don’t really help your own case if someone jumps in and asks, “Hey, what have you done the last two days?” and the answer is, “Oh, well, I was just trying to track down the guy that sent this spam email campaign that hit our network.”

Dave Bittner:

Kind of like, if I get the flu, it doesn’t do me a whole lot of good to try to track down who I got it from, because I have the flu and I need to concentrate on getting better. Yes, it may be good to protect the rest of my family if I know Uncle Bob has the flu. I don’t want Uncle Bob visiting, and that’s who I likely got it from, but ultimately, my number one priority is getting rest and medicine, or whatever else I need to do. The fact of the matter is, I got the flu.

Greg Lesnewich:

That’s a great analogy, and I’m going to start using that for the rest of my life.

Dave Bittner:

All right. Good. Well, there you go.

Greg Lesnewich:

Thank you for that.

Dave Bittner:

My pleasure. My pleasure.

Our thanks to Greg Lesnewich for joining us.

You can read the complete report, “Targeting of Olympic Games IT Infrastructure Remains Unattributed.” That’s on the Recorded Future website in the blog section, from their Insikt Group.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner.

Thanks for listening.
