Podcast

Inside the Adversary Exploit Process

Posted: 13th July 2020
By: Caitlin Mattingly

With thousands of vulnerabilities reported and classified each year, it can be challenging to keep track of which exploits are actually being used by threat actors. Researchers with Recorded Future’s Insikt Group have been exploring this issue, and one of their key findings is that less sophisticated threat actors often resort to using older vulnerabilities with easily accessible resources and tutorials.

Greg Lesnewich is a threat intelligence researcher at Recorded Future, and he joins us with insights on the tactics, techniques, and procedures commonly seen from these threat actors, the likely motivation behind them, and what security professionals can do to best protect their networks against them.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 166 of the Recorded Future podcast. I'm Dave Bittner from the CyberWire.

With thousands of vulnerabilities reported and classified each year, it can be challenging to keep track of which exploits are actually being used by threat actors. Researchers with Recorded Future’s Insikt Group have been exploring this issue, and one of their key findings is that less sophisticated threat actors often resort to using older vulnerabilities with easily accessible resources and tutorials.

Greg Lesnewich is a threat intelligence researcher at Recorded Future, and he joins us with insights on the tactics, techniques, and procedures commonly seen from these threat actors, the likely motivation behind them, and what security professionals can do to best protect their networks against them.

Greg Lesnewich:

So we've put out a report for a number of years talking about the most exploited vulnerabilities year over year, based on our visibility. And we wanted to pivot a little bit from that, continue to publish that report, which other people in the industry seem to have found useful, and dig into a little corner of it.

One of the things that frankly drew our team to this was that we really don't have a ton of visibility into the period between an actor testing an exploit and it being used in an incident, until it pops up somewhere else online and someone can discuss it. And so instead of focusing "right of boom," after all the incidents have happened, on what was most exploited, we really wanted to focus on what people are trying to exploit, and the foundations of this tooling.

So we went and attempted to do that, focusing on things that were very obvious test exploit code, to get an understanding of the volume of what's being tested. We have a couple of different metrics for what counts as an obvious test, and we're trying to get the biggest, most repeatable data set for this. We're not trying to spot the advanced adversary that subtly tests their code on VirusTotal over time, but the average actor that is going to be involved in the average intrusion at your average enterprise. And we think that a lot of that volume data sometimes gets overlooked. So that was the genesis of this project.

Dave Bittner:

So looking at those day-to-day, I don't know, workaday cybercriminals who are out there as a regular nuisance.

Greg Lesnewich:

Yep. Correct. And previous time in SOC and IR gigs myself, I saw a lot more of that than I did of any high-end cyber criminal or APT stuff.

Dave Bittner:

Well, this report really walks through it step by step. Why don't we go through it together? Can you take us through what you've laid out here?

Greg Lesnewich:

Sure. One of the things that we really tried to understand was what makes an adversary choose a given exploit to target their spam list or the enterprise that they're interested in targeting. We stepped through that in the manner of identifying not only the exploit or the platform that they'd like to target, but pivoting from there to understand whether they're reverse engineering the platform itself to try and find a weakness in it, whether that's through comparing patched and unpatched versions of the software or through other fuzzing techniques, and then building something around that to exploit it. So they find an obvious weakness in the platform or the software, presumably something that's used by a whole lot of people, hence the finding that most of this is Microsoft and Microsoft Office-focused.

And so from that, the adversary builds things out, and "adversary" here can include red teamers and pen testers, security practitioners in general, this isn't just the "bad guys." From there they have to develop code that is actually able to exploit the weakness that they found, and then they have to test it. They have to verify that not only does their code run, but that it has the expected outcome of poking a hole in that weakness to allow whatever downstream effects they want, whether that's executing other code on the victim host, downloading another file, or gaining additional privileges on the victim host.

And from our visibility, the code testing was the biggest area that we could see and really gather data on. It has the double benefit of capturing not only the actors that go through that whole process, identifying products that have weaknesses in them and then writing exploits for them, but also the people who are just trying to shoehorn exploit code that someone else has already taken the time to build into their own program.

Dave Bittner:

Well, let's dig into some of the details of each of these sections. I mean, when you're talking about identification, one of the things you point out is that quite often the criminals are a bit ahead of the game versus the general public. They have a little bit of lead time.

Greg Lesnewich:

Yep. They typically do. And some of that, I speculate, comes from the fact that someone will mention that they've exploited a client using a vulnerability that isn't public yet. But I think for the most part, they very much keep their ear to the ground and have their own — for lack of a better term — data gathering processes. It is frankly much simpler for a lot of them to turn something that is deemed a critical vulnerability into exploit code than it is for a massive enterprise to put mitigations in place or patch for it.

The term that often gets used for a lot of these things is "one-day vulnerabilities." After a patch gets issued, actors will go and try to find a way to exploit it. Obviously that tends to lean a little toward the higher end of the skill spectrum, but we do see a number of legitimate security companies providing their own proof-of-concept code for those things as well. And we've seen the Australian government come out and dub an unnamed actor set's activity the "copy and paste" campaign, because they kept reusing other people's exploit code. These actors are very rapid to weaponize these things, and that speed opens up a whole heap of industries and enterprises that they can target.

Dave Bittner:

And so, to be clear, when you're talking about looking at the patches to find the weaknesses, I suppose, what you're describing is that the patches themselves are a roadmap to where the vulnerability is?

Greg Lesnewich:

Indeed. I'm trying to think of an accurate term for this. They can take the two programs and effectively highlight the differences between the code, and then in the way that people in our industry would reverse engineer malware to understand its functionality, they would do the same thing for those differences in code.

Obviously for things like Microsoft Office or the Windows operating system, these are huge files that are pretty resource intensive for the host to go through. The term is "diffing," programmatically spotting the differences in the code. The newly patched code won't necessarily be commented saying "this is where the fix lives," and it won't necessarily be in plain English, but some very clever and determined adversaries, that maybe have a little bit of self-hate, can definitely dive in, figure out what changed, and weaponize that.
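To make the diffing idea concrete, here's a toy Python sketch. Real patch diffing is done on compiled binaries with dedicated tools; this stand-in just compares two hand-written "disassembly" listings (both invented for illustration) and surfaces the lines the patch added, which is exactly the hint an adversary starts from.

```python
import difflib

# Toy stand-ins for disassembly listings of the same function, before and
# after a patch. Real diffing tools operate on binaries; this only
# illustrates the concept.
unpatched = [
    "mov eax, [ebp+buf_len]",
    "memcpy(dst, src, eax)",   # no bounds check before the copy
    "ret",
]
patched = [
    "mov eax, [ebp+buf_len]",
    "cmp eax, MAX_LEN",        # bounds check added by the patch
    "ja  fail",
    "memcpy(dst, src, eax)",
    "ret",
]

def patch_diff(old, new):
    """Return the lines the patch added, a hint at where the bug lived."""
    return [line[2:] for line in difflib.ndiff(old, new) if line.startswith("+ ")]

added = patch_diff(unpatched, patched)
for line in added:
    print(line)   # the new bounds check points straight at the old weakness
```

The added bounds check tells the reverse engineer which input (here, a length field) the unpatched version failed to validate.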

Dave Bittner:

A little too much time on their hands. Well, you dig into something here when we're talking about finding the weaknesses. You talk about fuzzing. Can you describe that for us? So what are we talking about with that?

Greg Lesnewich:

Fuzzing is an automated method to test how a program responds to different inputs. That could be across a litany of things, starting with how the program is launched. Most people are familiar with the double click, but is it launched by a command line process instead? And it runs all the way through all the different programmatic inputs a file can take, to see if they can cause the program to crash. Typically the crash gives them some insight: "oh, that's interesting, what was the last thing that was input when this crashed?" It allows them to troubleshoot and say, "this was the problematic input. Okay. Interesting." Then they go look at what code is handling that input, figure out what allowed it to break, and sift through the data to find a more specific target in there.
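A minimal sketch of that loop, with everything invented for illustration: a deliberately buggy toy parser (a length byte followed by a payload, with no validation of the length field) and a naive random fuzzer that records which inputs make it crash.

```python
import random

def parse_record(data: bytes) -> bytes:
    """Toy parser: first byte is a length field, the rest is payload.
    Bug: it trusts the length field instead of checking it against the data."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:1 + length]
    # Simulated out-of-bounds read: the declared length lies.
    if len(payload) < length:
        raise IndexError("read past end of buffer")
    return payload

def fuzz(target, rounds=1000, seed=1234):
    """Throw random inputs at `target` and collect the ones that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        try:
            target(data)
        except Exception as exc:
            crashes.append((data, type(exc).__name__))
    return crashes

crashes = fuzz(parse_record)
print(f"{len(crashes)} crashing inputs out of 1000")
```

Each recorded crash is the "what was the last thing input?" breadcrumb Greg describes: the crashing bytes point the analyst at the code path that mishandled them.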

Dave Bittner:

I suppose that every crash is a potential attack surface.

Greg Lesnewich:

Yup. Even if it is just crashing the program with relative ease, I'm sure that we've all experienced that with apps on our computers or smartphones that you learn after a couple of times, “oh okay, I can't click these two buttons too quickly after one another, because then the program freezes.” So this is just abusing that, in a systematic, programmatic way.

Dave Bittner:

Another thing that you point out here is this whole notion that there's a cost-benefit factor for the bad guys. It has to be worth it to them: do they use off-the-shelf tools, do they develop their own tools, or something in the middle? That's part of this whole equation.

Greg Lesnewich:

Absolutely. And I think there's a general conversation in our industry right now about open source security tooling. The challenge for defenders is that it offers these actors a free and well-maintained code base for this exploitation to happen. Obviously developing custom code is an expensive endeavor, even just in terms of time, let alone if you're paying someone to do it. You're a business at the end of the day, whether you're operating for a criminal entity, operating on your own, or operating under more targeted data-stealing operations, and the more easily you can conduct something without getting caught, the more you're going to lean toward doing that.

The more targeted something gets, generally the more expensive it will be. So if you don't need to use the hyper-expensive thing and you can use the free thing, maybe that even saves you developing the expensive custom exploit, and it allows you a very big return on investment. For the free open source tooling that's out there and some of the commercial off-the-shelf stuff, both general offensive cybersecurity tooling and exploit tooling, the return on investment is huge, because they haven't paid anything for it other than the effort of learning how to use it or implementing that code base into their own malware.

Dave Bittner:

Help me understand the testing process here, because I imagine that these folks have to be careful that if they're running unproven code, if they go out into the world and try to use that on things, if the code is not correct, well, it shines a light on them. It might be that the jig is up.

Greg Lesnewich:

Yup. Absolutely. And I think part of it is verifying that their code works. We see other actors using a test bed like VirusTotal to determine if their code is undetectable by most antivirus companies. In this instance, it leans much more toward: does this exploit the thing I expected it to exploit, or does it fail to run? A lot of times, whether you're selling that capability downstream to someone else or using it in your own intrusions, you want to verify before you get somewhere, after you've done all the work of sending the phishing emails and all those sorts of things, that your glass cutter works. You want to make sure you're able to get in the way you expect to, and not have to rely on, A, other tooling, or B, sit there in a panic and say, "well, the rest of my plan worked and I'm just stuck here in limbo, unable to exploit my target."

Dave Bittner:

And we spent all this money on the phishing side and now there's no way for us to follow through.

Greg Lesnewich:

Absolutely.

Dave Bittner:

What are the take homes for you? What do you hope people leave with after they've read through this research?

Greg Lesnewich:

I think a couple of things. One, I hope that other folks on the vendor side or in the industry can take this as a bit of a base to work off of, and maybe that's finding, call it, sexier iterations of much more targeted malware, or particular actors doing this testing, through the various artifacts they leave behind in the malware. And from a vulnerability side, we see two pockets of activity. One of them is obviously the much more recent vulnerabilities. CurveBall was the most recent and the most tested in our dataset, the weakness in the Windows OS related to how it handles code-signing certificates. The interesting thing is that we then saw a continued amount of testing of much older exploits, things that were written and released back in 2012.

My big takeaway from that is knowing where to focus your defenses. You can handle an incident that's very targeted and comes in against the latest CVE, because there's so much data around it, and it's a rare enough occurrence that you can put your best team on it, for lack of a better term, or your best analyst inside of an enterprise. And the rest of this stuff has been around long enough that it's still helpful to know it's still being targeted.

But I think that lands much more in the bucket of: actors continue to have interest in it, whether that's red teams, pen testers, or real adversaries. And it is ripe, I think, for automation in the manner of detection. I hate to just come tell people, "patch, that's the best solution." In a big enterprise that isn't always a realistic answer. Having seen an enterprise attempt, fortunately, to patch the SMB vulnerability that ended up being used in WannaCry about two months before WannaCry hit, I understand the value of it, but there were 24,000 hosts they had to patch, and it was a project that took a few months.

Knowing that these are the vulnerabilities actors are going to target, there are other mitigations you can put in place, whether that's IDS alerts, endpoint alerting, pulling data from Recorded Future or your preferred threat intel vendor to help you defend, or even something as simple as a YARA rule that sits on your mailbox to filter inbound mail that looks like it carries that exploit code. Being able to take that big pile of old exploits that people are going to continue to abuse, put it off to the side, and say, okay, we have enough mitigations in place for these things, because we know actors still love them, lets you focus on the actors that have the time and money to invest in more custom tooling.
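The mailbox-filtering idea can be sketched in a few lines of Python. To be clear, the byte patterns below are placeholders invented for illustration, not real exploit signatures, and a production deployment would use an actual YARA engine rather than regular expressions; this only shows the shape of signature-matching on attachment bytes.

```python
import re

# Placeholder patterns standing in for the kinds of byte signatures a YARA
# rule might match in attachments carrying old, well-known exploits.
# These are NOT real exploit signatures.
SIGNATURES = {
    "fake_ole_equation_marker": re.compile(rb"EQuATiOn\.3"),
    "fake_shellcode_nop_sled": re.compile(rb"\x90{16}"),   # run of 16+ NOPs
}

def scan_attachment(data: bytes):
    """Return the names of any signatures found in the attachment bytes."""
    return [name for name, pattern in SIGNATURES.items() if pattern.search(data)]

benign = b"quarterly report, nothing to see here"
suspicious = b"header" + b"\x90" * 32 + b"payload"

print(scan_attachment(benign))      # -> []
print(scan_attachment(suspicious))  # -> ['fake_shellcode_nop_sled']
```

A hit doesn't prove the mail is malicious, but it's enough to quarantine the message automatically and keep the old, well-understood exploits from ever reaching an inbox, which is exactly the "put it off to the side" triage Greg describes.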

And I think that's where it fits into the general Recorded Future umbrella of: we're going to help you handle all of this stuff, particularly this high-volume, high-noise stuff that's trying to knock down your door every day, and help you make sense of it, block it, keep it from affecting your environment, and provide the context that comes with it.

Dave Bittner:

Our thanks to Recorded Future's Greg Lesnewich for joining us. There's more information on the Recorded Future website. You can find the article titled, “Behind the Scenes of the Adversary Exploit Process.” That's at recordedfuture.com.

Don't forget to sign up for the Recorded Future Cyber Daily email, where every day you'll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you've enjoyed the show and that you'll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast production team includes Coordinating Producer Caitlin Mattingly, Executive Producer Greg Barrette. The show is produced by the CyberWire, with Executive Editor Peter Kilpe, and I'm Dave Bittner.

Thanks for listening.
