Protecting Critical Infrastructure
Our guest today is Joe Slowik. He works in adversary hunting and threat intelligence at Dragos, a company specializing in securing industrial control systems and critical infrastructure. He shares the story of his unconventional path to a career in security, including time in the U.S. Navy and at Los Alamos National Labs, where protecting scientists, engineers, and researchers presented its own unique set of challenges. He shares his informed opinions on threat intelligence, with tips on how, in his view, many organizations could benefit from adjusting their focus and their approach.
This podcast was produced in partnership with the CyberWire and Pratt Street Media, LLC.
For those of you who’d prefer to read, here’s the transcript:
Hello everyone, I’m Dave Bittner from the CyberWire. Thanks for joining us for episode 63 of the Recorded Future podcast.
Our guest today is Joe Slowik. He works in adversary hunting and threat intelligence at Dragos, a company specializing in securing industrial control systems and critical infrastructure. He shares the story of his unconventional path to a career in security, including time in the U.S. Navy and at Los Alamos National Labs, where protecting scientists, engineers, and researchers presented its own unique set of challenges. He shares his informed opinions on threat intelligence with tips on how, in his view, many organizations could benefit from adjusting their focus and their approach. Stay with us.

Joe Slowik:
I came to Dragos from Los Alamos National Laboratory. I was running the incident response team there for a few years, so, working the Department of Energy mission. Providing security for a nuclear weapons laboratory is always an interesting task. I came there from the Navy. I was an information warfare officer for a little over five years, doing a number of different cyber missions, depending on how you want to define that. Prior to the Navy, I kind of bounced around a little bit, and really, in terms of ultimate background, I don’t have a computer science background. I was originally a philosophy PhD program dropout before I started bouncing my way and finding myself in the information security field, so, not quite the direct path that you might be used to with some other individuals.
Take us through — what was the decision-making process? Dropping out of school, as you described. What led you to the military?
So, after I got out of school, I worked in construction for a little bit and then landed a real job with a company in suburban Chicago called McMaster-Carr, an industrial supply company. I got into doing analytics work and after a while, that’s how I picked up my computer skills — was learning how to do that and automate more and more things. But I got bored with office work, and I just wanted something different, so I joined the Navy thinking that, “Oh, this’ll be more exciting.” And when I got in and got to my first duty station, they looked at my background and went, “Oh, you have a computer background. You’re going to go sit in an office and do computer stuff.” I’m like, “Oh, that’s exactly why I joined.”
You were going to see the world, right?
Exactly. So, that was initially kind of frustrating, but in hindsight, very good, because that’s where I really started to get a very robust background in different aspects of security. I did that for a while and learned an awful lot. I did a fair number of interesting things. When I got out, I still wanted to stay in government service, and that led me into the Department of Energy at a pretty interesting location, located in the mountains of northern New Mexico, working the explicit Blue Team mission for Los Alamos.

Dave Bittner:
What were some of the specific challenges you faced there, at Los Alamos? That’s certainly an interesting place to be.
Yes, it’s a very interesting place to be. It’s a very interesting security position to be in, because Los Alamos is almost like taking a super secret nuclear weapons design laboratory, bolting it onto a major research university, and covering the scope of security problems that you would find at your three-letter-agency-style locations. But also, having some of the same issues that you have, dealing with professors and undergraduates trying to get their work done and not having the least bit of concern for, like, “Well, I’m a scientist. I want my data to be free. Why should I mind if anyone’s stealing it?” Like, “Oh, that’s not how this works.”
Right, right. That’s an interesting natural tension that’s set up there. How did you contend with that?
One of the main issues I have with the security industry, especially from the defender mindset, is the “stupid user” problem. That, “Oh, if only the users weren’t so stupid, we’d be better off.” Well, don’t look at it that way. Try to build awareness for why it is we do the things, or ask the things, that we do. Try to put yourself in the mindset of the end user: at the end of the day, he or she is not being deliberately malicious — usually. There are exceptions, of course. But in most cases, they are just trying to get their job done. And so, if we start layering on controls or different ways of doing things that are an inconvenience and make it hard for people to get their job done, they’re going to look for a way around them, just to do their thing. So you try to reduce that friction, that tension, as much as possible, starting from a design perspective. Blocking websites willy-nilly isn’t necessarily going to do good things, because if you start taking out resources that people actually care about, or do so for reasons that aren’t obvious, people will try to find another way of doing things, and probably in a way that costs you some aspect of security or visibility.
And also, trying to build awareness when you have interactions with people, to explain why this is a phishing message, or, “I’m sorry, we have to wipe and rebuild your machine. We’ll restore what we can, but here’s what happened that led us to do this,” to get more buy-in and really try and get everyone pulling in the same direction, that same-team mentality, and not trying to act as the bad cop. “You did something wrong, and your work for the last week is going to disappear as a result.”
Yeah, it’s an interesting thing because I think for a lot of us, I guess the lesson is that people are people no matter where they are. I would imagine, in our minds, many of us think of the team at Los Alamos as being an elite group of the best of the best. But at the end of the day, everybody’s human and subject to the same weaknesses and foibles, no matter what sort of work you’re doing.
Exactly, and especially in an environment where you’re talking about very intelligent, elite people, world-renowned physicists and whatnot, who on a daily basis collaborate with persons in China or elsewhere across the world and are responding to emails and doing their thing. And if you’re not a security professional, some of the things that seem obvious to us just kind of fall by the wayside, or you’re not even aware of them from the get-go.
For example, there’s what I just mentioned. You think going into this environment, “Wow, I’m going to be surrounded by people who are so much smarter than me. This should be an easy job, except for the bad people trying to break in. The people on the inside should be fairly easy to deal with.” And then there’s the realization that just because you’re an expert in molecular biology or astrophysics or something doesn’t mean you’re an expert on even the basics of computer security, because no one’s ever taught you that. And quite honestly, I have yet to encounter a good information security training program that doesn’t make me want to pull my hair out. So there’s that part of it, where there still is an awful lot of education, and a role for the people on that security team to interact with and inform people who you would look up to and say, “Oh, there can’t possibly be anything I could tell these individuals. They’re so much smarter than I am.”
It’s an interesting thing that you think about. Many of these people have probably spent most of their lives being the smartest person in the room, and then when you cluster a group of those people together, that must make for an interesting social dynamic.
It is. You overhear really interesting conversations at the two or three local bars that are in town after work.
That’s great. So, take us through your career. You leave there, and it takes you, eventually, to where you are now at Dragos. What sort of work are you doing there?
Now, I’m focusing on what would commonly be called “threat intelligence work.” That’s how Dragos is organized. We have an incident response team, development and platform team, and then the threat intelligence team, which is where I am — except threat intelligence seems like it’s an abused and sort of a murky term these days. So, while that makes sense from a job title perspective, that everyone can kind of get an idea for what it is that you do here, I don’t look at it in quite the same way. Because in looking at the threat intelligence problem or the role, really, what we’re trying to do … What I do at Dragos is trying to find ways to empower and improve the knowledge and ability to respond for defenders. That at the end of the day, if I produce something that does not provide at least one actionable idea to improve defense in some measurable, concrete way, I’ve done a bad job. I produced a book report. Which — maybe that’s entertaining and interesting — but it doesn’t help anyone. There’s certainly different audiences where maybe that is important. Thinking about like, “Here is the general trend for what we expect for the next five years in the oil and gas industries,” since we’re an industrial control system-focused company.
But really, we’re trying to inform the people who are operating the plant environments, who are designing the day-to-day security operations. And the other side of that, too — it’s not just trying to spit out a list of indicators of compromise, because indicators of compromise are of limited value, at least in my opinion. Really, it’s trying to find that sweet spot between the broader lessons that you can draw from what we’re observing — either in intrusions that are underway, things that we’ve been able to analyze secondhand, or things we’re picking up in the environment — and the behaviors that are emphasized in what we’re seeing: how to find them, how to prevent them, and how to respond to them. We’re really trying to tell a story around what’s going on that abstracts away from the very specific details of the event in question. Here’s an md5sum. Oh, that’s great. You never see that md5sum again. What’s the big deal? Instead, here’s what that md5sum means. Here’s the piece of software behind it, or the script behind it. Here’s how it functions. Here’s how it lives on disk. Here’s how it looks in memory. Here’s how it talks over the wire. And these are all of the different touch points that maybe you don’t see in conjunction ever again, but you might see one or more of them in a completely different context, either reused or independently arrived at by another entity.
So, here’s how to start looking for that and responding to it. To really build up … I call it threat indications and warnings. The tactical information that can really build up a response.
But really, the main goal, the philosophy underlying what it is that I do and how I think we operate as a company, is enabling the defender to respond to and anticipate the next attack by learning what we can from the last one. Not to throw shade or to criticize others, but I think as an industry, threat intelligence becomes a combination of book reports on “Here’s What Happened at Institution XYZ,” combined with malware analysis reports, which can be interesting, and an overreliance on indicators of compromise as the supposed actionable information. And the result of that is, you both lose context around what happened in past incidents and fail to get a really good feel, a really good mechanism, for responding to the next incident.
So, what we’re looking to do is build an adversarial behavior awareness for what attackers are trying to do, or going to do, so that defenders are instead armed with an understanding of, like, “Well, here are the different ways an adversary can attack me, and different ways that they can try and entrench, and different mechanisms for trying to exfiltrate data.” By building awareness for that, I can build more holistic controls instead of just playing IOC whack-a-mole.
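The contrast between IOC whack-a-mole and behavioral awareness can be illustrated with a minimal sketch. The event fields, process names, and rule logic below are purely hypothetical assumptions for illustration, not any real product’s detection API:

```python
# Hypothetical sketch: IOC matching vs. behavioral detection.
# All field names and rule logic are illustrative assumptions.

IOC_BLACKLIST = {"44d88612fea8a8f36de82e1278abb02f"}  # known-bad MD5 hashes


def ioc_match(event: dict) -> bool:
    """Flags an event only if its hash is already on the blacklist."""
    return event.get("md5") in IOC_BLACKLIST


def behavioral_match(event: dict) -> bool:
    """Flags the underlying behavior: an Office process spawning a shell
    that reaches out over the network, regardless of the file's hash."""
    return (
        event.get("parent_process") in {"winword.exe", "excel.exe"}
        and event.get("process") in {"cmd.exe", "powershell.exe"}
        and event.get("network_connection", False)
    )


# A repackaged payload with a never-before-seen hash evades the IOC check
# but still trips the behavioral rule.
event = {
    "md5": "0123456789abcdef0123456789abcdef",  # not on any blacklist
    "parent_process": "winword.exe",
    "process": "powershell.exe",
    "network_connection": True,
}
print(ioc_match(event))         # False
print(behavioral_match(event))  # True
```

The point of the sketch is the asymmetry: the IOC rule is defeated by a trivial change to the artifact, while the behavioral rule survives because the adversary’s fundamental action has not changed.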
It’s a really good point you bring up. It’s something that comes up a lot on this show, as we focus on threat intelligence, is that importance of translating information, that raw firehose of information, into actionable intelligence. It seems like part of what you’re saying is, the trick is to make sure that intelligence is actionable. In order for it to be useful, it must be actionable.
Mm-hmm. And traditionally, people look at actionable, I think, in a very narrow way. That if you haven’t given me a domain, an IP, or a hash value that I can blacklist or throw into my SIEM and alert on, you haven’t given me something actionable. And I think that view is ridiculous. There’s certainly the threat intelligence Twitter mafia out there that will say if you haven’t given me IOCs, then get out of here. You haven’t given me any information. Well, I’m sorry I’m not enabling you to go pull samples from VirusTotal to repeat my analysis. You’re not my audience. My audience is the defender. Especially when you look at it from my position: I have a multitude of potential customers. I don’t know what equipment they’re running in their environment. I don’t know what visibility everyone has in their environment. So, I have to play to the broadest possible audience with enough different ways of trying to identify a problem. As frustrating as that seems from a machine-to-machine perspective — which I know is where everyone really wants to be, automating all the things — unfortunately, from an intelligence perspective, you can’t do that while still arming, equipping, and enabling as many people as possible.
So instead, what I try to do is take the narrative and behavioral description approach of outlining: here’s how the specifics of this attack work, here are your IOCs, et cetera, but here are the fundamentals for what this looks like on a behavioral level, and here are the different ways that you can look at it or try to identify it. If you’re running fancy EDR products, or you have really good host visibility because you’re running Sysmon, or something like that, here are your options. If you’re only doing Windows event log capturing, well, here are your options there. It’s not as good, but here’s how you can get closer to the problem. And when you start talking ICS environments, well, maybe you only have network visibility. Here are the different artifacts you might see at that level that give you at least some perspective of what’s going on. You try to outline as many options, as many possibilities, that go after the underlying behaviors and the fundamental actions the adversary is pursuing, so that you can get the most visibility for your buck, even if you’re in a very constrained environment in terms of what the defender has available to see and spot these things.
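That tiered idea, offering a detection option for every level of telemetry a defender might have, can be sketched as a simple lookup. The tier names and the detection guidance strings are illustrative assumptions (they describe a generic remote-service-creation behavior, not any specific Dragos product content):

```python
# Hypothetical sketch: mapping detection guidance for one adversary behavior
# (remote service creation) to whatever telemetry a defender actually has.
# Tier names and guidance text are illustrative assumptions.

DETECTION_OPTIONS = {
    "edr": "Alert on service-creation API calls pointing at anomalous binary paths.",
    "sysmon": "Watch Sysmon Event ID 1 (process creation) for services.exe spawning unusual children.",
    "windows_eventlog": "Collect System Event ID 7045 (new service installed) and review odd service names.",
    "network_only": "Flag SMB writes to ADMIN$ followed by SVCCTL named-pipe traffic.",
}


def guidance_for(telemetry: list) -> list:
    """Return every applicable detection option for the telemetry a defender has,
    so even a heavily constrained environment gets some coverage."""
    return [DETECTION_OPTIONS[t] for t in telemetry if t in DETECTION_OPTIONS]


# An ICS environment with only network visibility still gets one usable option.
print(guidance_for(["network_only"]))
```

The design choice mirrors the interview: rather than emitting one machine-readable rule for one sensor, the intelligence product enumerates the same behavior across visibility levels and lets each defender pick what they can actually implement.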
Now, you all function within the ICS community. That is your focus. It strikes me that part of the value added there is that if I’m keeping my electrical power plant up and running, I’m head-down focused on the things that it takes to do that. But you all are looking at — even within that world of ICS — you’re looking at electric. You’re looking at water. You’re looking at nuclear. You’re looking at wind. So, you can provide the expertise, the things that you’re learning from what the attackers are doing over there in water, that me, over here in electric, might not have a view on. I wouldn’t get a view on that. But you’re able to provide that bigger picture of what’s happening all across the spectrum of ICS infrastructure.
Yeah. I think that’s a very important thing for us to do, because within a single company, every plant environment is almost its own unique and beautiful snowflake. When they’re built, they have whatever equipment was purchased at the time, running different levels of whatever firmware or whatever hardware version. So, every environment is, to a certain extent, unique. But instead of looking at that as a problem, there are still, at the end of the day, very fundamental actions that any attacker will need to take, and that any defender should prepare against, to have an effect within that environment. The adversary needs to get into the plant environment, whether by bridging from the business IT network or through some unfortunate direct-access method, because you have a PLC hanging out there on the internet or something ridiculous like that. And then once there, they need to entrench. They need to gather data. They need to survey the environment. They need to move tools into the environment at some point in order to really start having an effect. Identifying all those general touch points that are applicable in essentially every environment lets you build a description, a picture, that’s applicable across oil and gas refining, electric generation, electric distribution, and manufacturing. Because getting too bespoke really limits what sort of actionable information you can provide.
If I provide something that is extremely narrowly tailored to just this one environment by focusing very much on how this one piece of malware was tailored, yeah, I might be doing something beneficial for that original targeted organization. But I might not be providing very good information, or certainly not very actionable information, toward this other plant even within the same industry, let alone something that’s in an entirely different ICS vertical.
Now, for those folks who are just getting started with threat intelligence — they’re in that mode where they’re shopping around, they’re trying to decide what part it’s going to play in their security posture — what advice do you have for someone like that? What’s the most effective way for them to figure out what works for them?
So, one of the first things is really getting an understanding of, who am I, and what do I need? I refer to this as “threat profiling.” When I say that, it’s an understanding of, what does my own environment look like? What do I have? What can I see? What’s of value in my network, both from an instrumental standpoint … Like, okay, here are my domain controllers, and what sort of software and hardware am I running? As well as from a business value proposition — I have intellectual property, I have customer data, I have healthcare data. Those sorts of sources of value within the organization.
And then, from that, start thinking about, “Well, what would a bad guy want to do to me? How am I a target? What sources of value would someone want to compromise?” Whether that’s something as indiscriminate as ransomware or cryptocurrency mining, or the targeted attacks that might be of value: Does someone want to steal my intellectual property? Do I have financial value that someone could try to extract? And based upon that combination of the internal and potential external view, what questions do I have, and who’s going to start answering them? That really fuels the “What do I need from an intelligence perspective?” question.
Once you start approaching the problem from that standpoint — how would an adversary try to extract value from my organization, who is capable, and what sort of questions need to be answered for me to improve my own positioning to protect those sources of value, whether through the instrumental things an adversary needs to compromise on the route to sources of value, or those sources of value themselves — you can start piecing that together. It’s not the easiest of things, but to borrow a term that was popular for a bit and then seemed to come under a lot of derision, it’s the idea of good cyber hygiene.
It’s just a good exercise, going through these steps, to get a feeling for what your own network environment looks like and who might be interested in compromising it, and building up a picture from there to get an understanding of your threat intelligence needs, your knowledge gaps, and the visibility gaps that need to be addressed.
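The threat-profiling exercise described above can be captured as a simple data structure that makes the unanswered questions explicit. Every field name and gap check below is an illustrative assumption, a minimal sketch of the idea, not a formal methodology:

```python
# Hypothetical sketch of the "threat profiling" exercise as a data structure.
# Field names and gap checks are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ThreatProfile:
    # What does my environment look like, and what can I see?
    assets: list = field(default_factory=list)        # e.g., domain controllers, PLCs
    visibility: list = field(default_factory=list)    # e.g., netflow, Windows event logs
    # What's of value, instrumentally and to the business?
    sources_of_value: list = field(default_factory=list)  # e.g., IP, customer data
    # How might an adversary try to extract that value?
    likely_threats: list = field(default_factory=list)    # e.g., ransomware, IP theft

    def intelligence_gaps(self) -> list:
        """Questions the profile can't yet answer; these drive intelligence needs."""
        gaps = []
        if not self.visibility:
            gaps.append("No visibility defined: what can we actually see?")
        if not self.likely_threats:
            gaps.append("No threat model: who would target our sources of value?")
        return gaps


# A partially filled profile immediately surfaces what still needs answering.
profile = ThreatProfile(
    assets=["domain controllers", "historian server"],
    sources_of_value=["intellectual property", "customer data"],
)
print(profile.intelligence_gaps())
```

The value of writing the profile down, even this crudely, is that the gaps become a concrete shopping list for evaluating intelligence providers, rather than buying a feed first and deciding what it answers later.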
Our thanks to Joe Slowik from Dragos for joining us.
Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.
We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner. Thanks for listening.