Using Threat Intelligence to Battle Ransomware in the Public Sector
September 4, 2019 • The Recorded Future Team
We recently began exploring the question of why ransomware attacks against state and local governments have been trending up over the last few years. In a webinar hosted by Carahsoft, Recorded Future’s Allan Liska, a senior threat intelligence analyst who recently published a report on this topic, looked at why threat actors attack public sector institutions and examined their most common tactics, techniques, and procedures (TTPs).
Here, Allan is joined by Stu Solomon, Recorded Future’s chief strategy officer, in a conversation that explores why state CISOs and other security leaders need to better manage risk by balancing limited available resources against the need to secure their organizations from these ever-evolving threats — and the essential role that threat intelligence plays.
The following conversation has been edited and condensed for clarity.
We’d like to talk about network segmentation as a backstop to potential lateral movement. I think the first place I’d like to start this conversation — and I would love your opinion on this as well, Allan — is around basic hygiene, knowledge, and awareness of what good and bad looks like in the network in the first place.
One of the key things I always like to talk about is this notion of the “80/20” rule. It’s a knee-jerk reaction across our industry to spend 80% of your time on the most exotic, unique, or most targeted scenarios, the kind associated with historical events and persistent threats. The reality is that the vast majority of the impacts achieved by malicious actors come from taking advantage of known vulnerabilities and using tried-and-true techniques and procedures.
We tend to get the “shiny object” syndrome and miss those as security professionals. So I always like to think that one of the most important things we can do is take a step back and say, “What am I doing on the basic blocking and tackling, or the basic hygiene of my environment to get basic visibility, to have a solid patch management process and plan in place? To be able to gain correlation and visibility across the common logs that I should be looking at? Firewall logs, VPN logs, DNS information, telemetry data coming off of my endpoints?”
The first, most basic notion I would rely on is to be able to get that baseline of understanding, and then, using a myriad of information-sharing and intelligence techniques, start to build whitelists and blacklists — to limit the amount of traffic that’s coming in to the “known good,” or to exclude the “known bad.”
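As a minimal sketch of that last step, here is how triaging observed connections against “known good” and “known bad” sets might look. The addresses and list names are hypothetical, not drawn from any real feed:

```python
# Hypothetical sketch: triage observed connection sources against an
# allowlist ("known good") and a blocklist ("known bad") built from
# threat intelligence. Addresses here are illustrative assumptions.

KNOWN_GOOD = {"10.0.0.5", "10.0.0.8"}   # e.g., approved internal services
KNOWN_BAD = {"203.0.113.77"}            # e.g., an intel-reported C2 address

def triage(source_ip: str) -> str:
    """Classify a connection source for blocking, allowing, or review."""
    if source_ip in KNOWN_BAD:
        return "block"    # matches known-bad intelligence
    if source_ip in KNOWN_GOOD:
        return "allow"    # explicitly trusted
    return "review"       # unknown: a baseline deviation worth a look

events = ["10.0.0.5", "203.0.113.77", "198.51.100.9"]
print([triage(ip) for ip in events])  # ['allow', 'block', 'review']
```

In practice the sets would be fed continuously from your intelligence sources rather than hard-coded, but the default-to-review posture is the point.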
Allan, I’m curious, do you have any other thoughts around this first pretty basic mitigation step?
Generally speaking, we’re talking about cybercriminals here. While they are more advanced than, say, your average script kiddie, they’re not using any zero days or anything particularly advanced in order to launch their attacks. For instance, right now, there’s a lot of concern around bad guys scanning for the BlueKeep vulnerability, but we haven’t seen a mass-marketed BlueKeep exploit yet. In other words, not one that’s widely deployed and readily available. So when they go after remote desktop protocol (RDP) servers, they’re just looking to reuse credentials, or looking for common credentials, or credentials that haven’t been changed in six years.
Then, when they’re moving around the network, they tend to use tools like Mimikatz, or Active Directory itself, as a way of delivering ransomware. So again, nothing exotic in their deployments. They’re using standard system-admin tools, and if you’re not using those tools in your organization, you can disable them, alert on them, or block them, if necessary.
That’s right — there are all kinds of politically motivated, or ideologically motivated, or nation-state-motivated attack patterns and triggers that are in our environment today.
The reality is, with ransomware in particular, the impetus behind this particular application of the technique is really economic gain. We’ve seen this historically over time, from the very first attacks, which required victims to literally fill out a handwritten ransom check and deliver it by mail, to today, where ransoms are paid using cryptocurrencies.
This is a financially motivated attack, and in some cases, there may be some antisocial implications behind the attacker’s motivations as well, as they’re trying to deny or disrupt a particular environment. This is really not a technique that’s associated with nation-state activity, per se.
Having said that, one of the key things Allan just mentioned that I think is particularly critical for this audience to consider is that the motivation doesn’t change the approach. Whether an adversary is after strategic information advantage through espionage techniques, or the denial, destruction, or degradation of a network’s capabilities, or economic gain, as in this case with cybercrime techniques, the reality is that regardless of how sophisticated the malicious actor is, ultimately, they’re going to do what works. They’re going to go after the environment in a way that’s tried and true, where they can generally obfuscate their identity and still produce the outcome they want. They’re not going to resort to the most advanced techniques unless they absolutely have to.
Now, the second piece that I’d like to talk about for a second is also relatively intuitive, but remarkably difficult to actually pull off in reality: having a solid backup plan and disaster recovery plan. You can’t be susceptible to these impacts if you’ve hardened your environment against them. In this case, it’s all about having a multi-layered backup scenario for your data stores. And just as important is understanding which data elements are most critical. Having a good data classification and asset criticality plan that informs the redundancy you build into your overall architecture is an absolutely critical part of this.
So while we’re talking about a prolific cyberattack scenario, the reality is that the answer is predicated on what you would normally do in a disaster recovery and business continuity planning effort that would not necessarily take into account this particular attack pattern.
The way I like to think about it is this: take a page from non-cyber disaster recovery planning and the idea of “all-hazards” planning. In all-hazards planning, you don’t worry about the impetus; you worry about the impact. If the impact in this case is the denial of availability of your data resources at a point in time, then plan around what that potential outage or disruption may look like. Shrinking the problem into something we already understand how to solve is a key part of the protective, layered defense that we want to build here.
Allan, I’ll turn to you. Anything you want to add on that point?
I mean, that’s the biggest one. The time to check whether or not your backup or your disaster recovery plan works is not after you’ve had a ransomware attack. That constant checking and verifying that your backup is working the way it’s designed to, and that you can recover in a timely fashion, becomes so important.
I’ll say this: we had one example, one town that we talked to, where they wound up having to pay the ransom because one of the systems that was hit was the police body cameras. They had a good backup plan, but they found that, for whatever reason, they couldn’t recover the files from the body cameras. So they wound up having to pay the ransom to get those back.
So make sure not only that you’re checking, but that you’re checking everything in your disaster recovery plan. You don’t have to check everything all the time, but you do have to do regular spot checks of different data types to ensure that they can be effectively recovered.
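A spot check like the one Allan describes can be automated. This is a minimal sketch, assuming you can restore files to a scratch location and compare them against the originals; all paths and names are illustrative, not a real backup tool’s interface:

```python
# Hypothetical sketch: spot-check a backup by sampling files at random and
# comparing checksums of the restored copies against the originals.
# All names and paths are illustrative assumptions.

import hashlib
import random
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum a file so original and restored copies can be compared."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def spot_check(originals: dict, restored: dict, sample_size: int = 3) -> list:
    """Return names of sampled files whose restored copy does not match."""
    sample = random.sample(sorted(originals), min(sample_size, len(originals)))
    return [
        name for name in sample
        if sha256(originals[name]) != sha256(restored[name])
    ]
```

Run it on a rotating sample of data types (databases, documents, body camera footage, and so on) so that a recovery gap surfaces during a drill rather than during an incident.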
Yeah, that’s a great point, Allan. And that’s not exactly the kind of impact that we think about when we think about a cyber-motivated attack. Back to your point earlier, given the fact that this is more of a ubiquitous or random attack and less of a targeted attack, the attackers probably didn’t contemplate the chain of evidence disruption that they were going to cause in law enforcement when they sent out their malware.
The other interesting thing that struck me as you were looking at your sample set of about 170 different organizations that had fallen victim to ransomware over the last three years was that about a third of them were law enforcement agencies, and a slightly smaller percentage were schools and universities. So there really isn’t a single type of target that falls victim to this; it’s rather ubiquitous across the board. That’s also something to think about when you consider the disruption this can cause and the spot checks that Allan suggests.
Another component to think about, and Allan, you alluded to this earlier, is this idea of segmentation, which I think is particularly important. Look at your network and understand what the interconnectivity is, particularly between your connected devices (your IoT devices, traffic cameras, security cameras, traffic signs, and traffic signals, just as examples) and the infrastructure that provides enterprise information management and everything in between. Could an attacker get in and create a lateral movement scenario that would significantly increase your surface area of risk when a particular attack occurs?
Understanding what that looks like and building in the segmentation necessary from an architectural perspective to be able to limit access and ultimately limit impact is a strategic application of the intelligence that you would want to be thinking about when contemplating how a particular technique may impact your environment.
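One way to reason about that limiting of access is to express the segmentation policy as an explicit list of allowed flows, with everything else denied by default. A minimal sketch, where all zone and host names are hypothetical:

```python
# Hypothetical sketch: a segmentation policy expressed as explicitly allowed
# zone-to-destination flows. Anything not listed is denied by default.
# Zone and host names are illustrative assumptions.

ALLOWED_FLOWS = {
    ("enterprise-it", "patch-server"),
    ("iot-cameras", "camera-collector"),  # IoT devices reach only their collector
}

def is_allowed(source_zone: str, destination: str) -> bool:
    """Default-deny: only explicitly approved flows pass between segments."""
    return (source_zone, destination) in ALLOWED_FLOWS

print(is_allowed("iot-cameras", "camera-collector"))  # True
print(is_allowed("iot-cameras", "patch-server"))      # False
```

The real enforcement lives in firewalls, VLANs, and access control lists, but writing the policy down this way makes it easy to audit which cross-segment paths an attacker could actually use.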
The second one, equally important, is this notion of role-based access. Segmentation at the network layer is the technical component; the human component goes back to the notion that all data has value, and some data should not be exposed to every individual who has access to the network. Implementing least-privilege principles and role-based access, so you understand whether a person’s job, role, or function should or should not have access to a particular data element, will help you further reduce your overall surface area at risk, which is critical to both mitigating and preventing potential impacts from attacks.
I think that’s critical. I know it sounds like we’re being a little bit defeatist, like, “Okay, expect an attack to happen and limit the damage,” but you do have to prepare for those.
Again, using an example that we saw in talking to some of these cities and states that were victims: in one case, somebody from the information technology department told us there was one person in the court system who needed regular access to one file system over in the police department. Rather than just give that one person access to that one server, they opened up the entire court system network to the entire police department network. Which meant that when ransomware hit the police department, all the files in the court system were also encrypted, which meant that dockets were down for days, among other things.
So yes, it’s more difficult, and yes, it makes the network more complex, and complexity is never fun to manage. But these kinds of activities, where you limit access strictly to the people who need it and only to the systems they need, also make you more secure.
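The remediation described in that court-system example, granting one person access to one server rather than opening a whole network, maps naturally onto a role-based check. A minimal sketch, where every role and resource name is hypothetical:

```python
# Hypothetical sketch: least-privilege, role-based access. Each role maps to
# the specific resources its job function requires; anything not granted is
# denied. Role and resource names are illustrative assumptions.

ROLE_PERMISSIONS = {
    "court-clerk": {"court-docket-db"},
    "court-liaison": {"court-docket-db", "police-fileshare-01"},
    "patrol-officer": {"police-fileshare-01", "bodycam-archive"},
}

def can_access(role: str, resource: str) -> bool:
    """Access requires an explicit role-to-resource grant."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_access("court-liaison", "police-fileshare-01"))  # True
print(can_access("court-clerk", "police-fileshare-01"))    # False
```

Had access been scoped this way, ransomware in the police department would have reached at most the one shared file system, not the entire court network.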
Carahsoft is a leading provider of IT solutions for government agencies, helping them implement the best solution at the best possible value. Watch the full webinar here to see the entire conversation between Allan and Stu, which includes a more extensive discussion on the topics covered here and a question-and-answer session at the end.