Podcast

Questions to Ask When Shopping for Threat Intelligence

Posted: 2nd April 2019
By: ZANE POKORNY

Our guest today is Brian Martin, vice president of vulnerability intelligence at Risk Based Security, a company that provides risk identification and security management tools leveraging their data-breach and vulnerability intelligence.

Brian shares his experience turning data into meaningful, actionable intelligence, common misperceptions he’s encountered along the way, and why he thinks companies shopping around for threat intelligence need to be careful to ask the right questions.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 101 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.

Our guest today is Brian Martin, vice president of vulnerability intelligence at Risk Based Security, a company that provides risk identification and security management tools leveraging their data breach and vulnerability intelligence.

Brian shares his experience turning data into meaningful, actionable intelligence, common misperceptions he’s encountered along the way, and why he thinks companies shopping around for threat intelligence need to be careful to ask the right questions. Stay with us.

Brian Martin:

I got my start in computers at a really early age, I don’t know, seven, eight, nine, on a TRS-80. I worked up through a Commodore 64 and into the world of, I guess it was Intel back then, the 386. But it was in the early 90s, ’91, ’92, that I got my first modem and found BBSes, and that sparked the journey. That’s probably where it starts, for the most part.

At that point I was in college. I got my first account on a VAX/VMS system that was connected to the internet, a wide world of interesting stuff. This was before the Web existed. Shortly after, I guess in the ’92, ’93 range, I moved to Denver and became a member of a hacking group up here. We broke into systems back in the day, back when it was a matter of discovery: you couldn’t Google everything, you didn’t have manuals, you didn’t have 87 operating systems to install through VMware. So hacking was more about discovery, about learning about the computer systems out there.

In ’96, I got my first job as a professional pen tester. The year before that, I quit hacking cold turkey as I was trying to get that real job. Throughout that phase and since, for various reasons, I was collecting vulnerabilities. In the 90s it was so that we could break into systems, and from ’96 on, for the next 13 years, it was to have a catalog of vulnerabilities and exploits to look for on penetration tests. I eventually joined OSVDB, the Open Sourced Vulnerability Database. It ran from 2003 through, I think, 2011.

For a while, I was about the only one hardcore working on it, 40 hours a week in addition to my job. But that whole time we were aggregating vulnerabilities, and eventually we determined OSVDB was not sustainable. We weren’t getting the community help and input we needed, and a lot of companies and individuals were using our work for profit without following our license.

So we basically closed it down and started up a commercial iteration, which turned into Risk Based Security. It’s one of our two offerings today. So it’s been going on, I guess, 25 years, most of which I’ve spent aggregating vulnerabilities.

Dave Bittner:

I wonder, for folks who got their start back in those early, formative days, when that sort of access to computers was not something that everybody had. Do you think that informs how you approach things today? Do you think that gives you a different perspective, a different way of thinking about things?

Brian Martin:

In many ways, it absolutely does. I would say it certainly changes my perspective. Growing up, you had to learn how to program using what was almost a brochure, and then go to a store to find a magazine that had programs printed out, and you had to transcribe them, type them in yourself, and debug them. I think that really makes me appreciate what we have today: the convenience, the ability to quickly Google and find information, or to say, “Hey, I want a program that does this,” and instead of writing it, just go search GitHub or whatever.

I would say that anyone coming from that era probably has a different approach.

Dave Bittner:

It seems to me like a lot of folks from that era have a really strong set of problem-solving skills; they’re very resourceful.

Brian Martin:

I think so, yeah. I want to say that that contributed to my industry experience, where I have always been a jack of all trades, master of none. Because back then, it was a little bit of programming, a little bit of the operating system, a little bit of hardware, a little bit of this. And then even getting onto the internet in ’92, I guess, having to figure out just enough about VAX/VMS to get onto what we considered a better machine, a UNIX system.

And then certainly for the hacker mindset, having to figure out how to determine what protocols were there before you had a port scanner, how to figure out what the protocol was before you had debugging tools, really. Yeah, it was a different experience for sure, and even moving into the pen testing era from ’96 on, like I said for the next 13 years. Then we saw a flood of tools come out that made life a lot easier in some ways, but it also made it more difficult in others, because the expectations of your testing grew with each year.

It wasn’t just, “Oh, find the open ports and look for known vulnerabilities.” It was start to find new and unique vulnerabilities. Start to find trust relationships, this and that. So definitely a fun adventure though.

Dave Bittner:

Describe for us Risk Based Security. What are the offerings that you have, and what’s the range of services you provide?

Brian Martin:

We’re actually a fairly niche company with our two major offerings. The one that I’m most familiar with and work on is called VulnDB, which is basically a commercial vulnerability database. It sounds simple on the surface; when you talk to someone, they’re like, “Oh, well, you just read mailing lists and collect vulnerabilities.” Yeah, that’s a beginning, but it goes way beyond that.

So we’re not only aggregating the vulnerabilities, but we’re doing some sanity checking to make sure they’re legitimate. We’re wrapping a whole lot of metadata around each one: we standardize vendor, product, and version names, we track the creditee, which is academic at this point but still interesting, and we track seven different dates for every vulnerability, when they’re available. And we’re constantly adding more fields, more classifications, more tie-ins to NPM modules and different sources.

The other offering is called CRA, Cyber Risk Analytics, which is basically data breach aggregation. That’s another one that sounds easy on the surface, but once again, we wrap a lot of metadata around it. We have to make sure we understand what data was compromised and how many records.

We’ve also, for seven or eight years now, been making Freedom of Information Act requests at the state level, because most of the states have some kind of mandatory breach disclosure. But some of them don’t have a consolidated law that says, “Well, it has to be this office or that one.”

So there are literally states that say, “Well you have to report it,” and leave it at that. We have to actually make requests against maybe three to five different agencies and say, “Hey, do you have any breaches reported? Can you please send them?” So there’s a lot of work around that.

The good thing on both sides is that, with that extra effort, we have a lot better view into the world of vulnerabilities and data breaches.
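
To make the shape of that metadata concrete, here is a minimal sketch of what one record in such a vulnerability database might look like. The field names, the particular dates tracked, and the example values are illustrative guesses, not VulnDB’s actual schema.

```python
# A hypothetical vulnerability record with the kind of metadata Brian
# describes: standardized vendor/product/version, a creditee, and several
# distinct dates per vulnerability. All field names here are invented.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class VulnRecord:
    vuln_id: str
    title: str
    vendor: str                            # standardized vendor name
    product: str                           # standardized product name
    affected_versions: List[str]           # standardized version ranges
    creditee: Optional[str] = None         # researcher credited with the find
    # A few of the distinct dates a mature database might track:
    discovered: Optional[date] = None
    disclosed: Optional[date] = None
    vendor_informed: Optional[date] = None
    vendor_acknowledged: Optional[date] = None
    solution_available: Optional[date] = None

record = VulnRecord(
    vuln_id="VULN-2019-0001",
    title="Example Library buffer overflow",
    vendor="ExampleVendor",
    product="Example Library",
    affected_versions=["<= 1.2.9"],
    creditee="Jane Researcher",
    disclosed=date(2019, 4, 2),
    solution_available=date(2019, 4, 9),
)
print(record.vendor, record.product, record.disclosed)
```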

Dave Bittner:

How does someone who is using your products and using your services, how does that fit into their overall spectrum of ways that they’re protecting themselves?

Brian Martin:

Good question. In my mind, we have three general types of customers. The first is probably the most traditional: a large organization, say 10, 20, 30,000 employees, maybe a million endpoints or devices or whatnot. They use our vulnerability intelligence to essentially secure their systems. So they’re looking at, “Well, we have this product. There’s a vulnerability in it. We need to patch it or upgrade it. Or there’s no solution yet? Maybe we need to isolate it on the network or do increased monitoring there.”

We also have a second type of customer that is more focused from a development angle. We’re talking large software shops that basically put out software used by tens, or maybe even hundreds of millions of people. Their products may use 100 or 200 libraries, third-party dependencies. So they rely on our intelligence to say, “Well hey, wait a minute. This third-party dependency has a vulnerability, it might affect our product as well.”

And then the third type is security providers. One such customer writes a vulnerability scanner, and they use our vulnerability feed to determine which plugins to write for their scanner. That gives them coverage well above and beyond anyone else in the industry, essentially.
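
As a rough illustration of the dependency-matching workflow that the second type of customer relies on, here is a toy sketch that matches a product’s third-party dependencies against a vulnerability feed. The feed format, field names, and naive version comparison are invented for this example; they are not Risk Based Security’s API.

```python
# Toy third-party dependency check: flag feed entries whose affected
# library/version matches one of our product's dependencies.

from typing import Dict, List

def parse_version(v: str) -> tuple:
    """Turn '1.2.9' into (1, 2, 9) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def affected(deps: Dict[str, str], feed: List[dict]) -> List[str]:
    """Return IDs of feed entries matching a dependency at or below
    the highest affected version."""
    hits = []
    for entry in feed:
        installed = deps.get(entry["library"])
        if installed and parse_version(installed) <= parse_version(entry["max_affected"]):
            hits.append(entry["id"])
    return hits

# Example: a product with two dependencies, one of them vulnerable.
deps = {"libxml2": "2.9.4", "zlib": "1.2.11"}
feed = [{"id": "VULN-2019-0100", "library": "libxml2", "max_affected": "2.9.8"}]
print(affected(deps, feed))  # ['VULN-2019-0100']
```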

Dave Bittner:

Do you find that there are common misperceptions that people have when it comes to how to go about best using things like vulnerability databases?

Brian Martin:

Absolutely. Even the mature organizations that approach us, they’ve got some great security practices, they’ve got a lot of discipline, they’ve got a lot of institutional knowledge, which is great to see. But even those organizations will often start out with, “Now that we’ve seen your data, wow. How do we actually effectively use this?”

We’re not a software company, we’re just the intelligence provider, but we give them some advice based on prior customers and our knowledge of what kind of data we have. In many cases, these companies, they either have to stand up a new system, a new team, or basically integrate it into their security life cycle. It can be a little challenging on their side, but once they do, generally speaking, we’ve had almost 100 percent customer retention since we started in 2011.

Dave Bittner:

That’s interesting. So there’s, I guess, as with any tool or new process, there’s a bit of an onboarding process of getting them up to speed and making sure that they understand and are using the tool in the best way possible.

Brian Martin:

Right. One of the challenges these companies typically have is that, prior to us, they had been using some vulnerability intelligence from somewhere. A majority of them were using CVE, which is run by MITRE, either directly or through NVD, the National Vulnerability Database, which is the same data set, but with some metadata added by NVD.

When they move to us, they realize, “Wait a minute. You have almost 66,000 more vulnerabilities than NVD does. We need to reevaluate how we approach our security.” It’s no longer just about patching the machines they commonly had to patch. Now they’re realizing, “Wait a minute, there are vulnerabilities in our IoT devices and this and that. Even the slide projector in the boardroom.”

I guess it’s a moment of realization for them: yes, they knew vulnerabilities were prevalent, but now they actually have the data to prove it. Like I said, it’s eye-opening to most of the companies coming into our offering.

Dave Bittner:

I would imagine also that this sort of information is helpful for the folks on the tech side of the house, to be able to take that data to folks in the boardroom.

Brian Martin:

Absolutely. And again, that’s definitely another place where it’s eye-opening. If the executives weren’t involved in the purchase decision, sometimes it’ll be a case where they get a monthly report that says, “Well, we triaged X number of vulnerabilities.” And the next month, the report says, “Well, we changed providers. We now had to triage twice as many vulnerabilities.”

That can be a shock to not only the tech side, but to the executives as well, where they have to reconsider, “Wait a minute. Are we throwing enough resources at security? Can the current process be modified so it stays efficient? Or do we need to bring on a few more people to help this process along?”

Dave Bittner:

Getting over that initial “aha” moment that there’s … Perhaps there have been a lot of unknown unknowns until you get to see the full spectrum of what might be out there.

Brian Martin:

Yeah, I would say that most of ours fall in the known-unknown category. They know there are other vulnerabilities out there, they just don’t know what they are. Then they get our offering and they’re like, “Okay, the floodgates are open. Now we have a much better picture.” And with that data, they can also, in our portal, pull statistics and build a historical picture.

One of the things we’re fond of reminding them is that we aggregate vulnerabilities regardless of the date, because we want to see that whole timeline. We want to see whether a vendor has gotten better or worse about responding to and mitigating vulnerabilities. We want them to be able to better determine what we call cost of ownership: “Well, this program has more vulnerabilities, but the vendor is a lot more responsive and the upgrade cycles are very easy. Whereas this other vendor has fewer vulnerabilities, but they go a year between releases, so we have software that sits vulnerable for up to a year.”

It can help guide purchasing decisions. And especially in the world of libraries, they can say, “It looks like this library has been abandoned; there hasn’t been a fix for a security vulnerability in six months. There’s another fork of it that seems to be actively maintained. We should probably move.”
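
The “cost of ownership” comparison Brian describes comes down to simple arithmetic over the per-vulnerability dates a database like this tracks. Here is a minimal sketch, with invented vendors, dates, and field names:

```python
# Compare vendor responsiveness: average days from disclosure to fix.
# The vendors, dates, and record shape are made up for illustration.

from datetime import date

vulns = [
    {"vendor": "VendorA", "disclosed": date(2018, 1, 10), "fixed": date(2018, 1, 20)},
    {"vendor": "VendorA", "disclosed": date(2018, 6, 1),  "fixed": date(2018, 6, 8)},
    {"vendor": "VendorB", "disclosed": date(2018, 3, 5),  "fixed": date(2019, 2, 28)},
]

def mean_days_to_fix(vendor: str) -> float:
    """Average window of exposure across a vendor's vulnerabilities."""
    gaps = [(v["fixed"] - v["disclosed"]).days for v in vulns if v["vendor"] == vendor]
    return sum(gaps) / len(gaps)

for vendor in ("VendorA", "VendorB"):
    print(vendor, mean_days_to_fix(vendor))  # VendorA: 8.5 days, VendorB: 360 days
```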

Dave Bittner:

One of the focuses of our program here is threat intelligence. So I’m wondering, what is your take on threat intelligence? What part do you think it plays in people defending themselves?

Brian Martin:

This might be a little contrarian, but I really dislike the phrase “threat intelligence.” That’s primarily because of the way it’s used in our industry. There are a lot of companies that will say, “We provide threat intelligence.” I say, “Okay, great. What does that mean to you?”

That’s when they have to qualify it or disclaim it and say, “Well, we do IP-based intelligence, or we do threat actor intelligence, or we do binary analysis.” I’m like, “Okay, great. You do those three types of threat intelligence.”

Threat intel is this big umbrella. For us, we provide two types of threat intel, data breach and vulnerabilities, and that’s all we do. I see threat intelligence, generally speaking, as a critical part of any organization’s security systems and policies, of how they’re going to manage and triage. But I think what’s most important is that each company needs to determine, based on their size, where they are, and how mature their security process is, which of those types is going to be the most impactful for them.

There are companies out there that can certainly use vulnerability intelligence, but they may not be in a position to adequately or efficiently use threat actor intelligence. They may not care who’s attacking them if they can just keep their system secure.

Dave Bittner:

As you point out, threat intelligence can be such a big umbrella that folks who are out there shopping around for it need to know what questions they need to ask.

Brian Martin:

Right. They need to know what kind of intelligence they’re after, and then, if any company says, “We provide it all,” they need to be very, very skeptical. They need to do a long evaluation, 30 to 60 days at least. They need to see what that data looks like. They need to have their teams internally say, “Okay, can we consume this data? Great, we can. Now, can we effectively use it or not?”

Dave Bittner:

Yeah, because I mean, that’s the whole thing, right? I mean, what good is threat intelligence if it’s not actionable?

Brian Martin:

Right. I’ve known a lot of companies, and I’ve known some of the intelligence providers that have sold these subscription feeds for hundreds of thousands of dollars, and the organizations bought into it because, hey, they heard that was the right thing to do. A year, two years, five years later, they realize, “Wait a minute, we’re not really using this data. It’s not helping us that much.” And they have to reevaluate that.

Dave Bittner:

It’s an interesting cautionary tale, I think.

Brian Martin:

It is, and that’s another reason that, when you go to an intelligence provider, you not only evaluate the data you’re receiving, but also the support. If you have a question about an IP, a threat actor, a vulnerability, or a data breach, and you go to them, do you get a response? Is it helpful? Is the company willing to work with you on a one-off situation like that? If they’re not, that should be a big warning sign to you as well.

Dave Bittner:

Where do you think we’re headed with these sorts of things? When it comes to tracking vulnerabilities, where does the future lie?

Brian Martin:

It’s hard to say. We’re actually approaching 200,000 vulnerabilities in our database, which we will probably hit early next week. That’s a big number, and we’ve been discussing it internally, saying, “Well, when we hit 200,000, then what?” Sure, we want to write a blog about it, but what is there to say other than that it’s 200,000, and that’s a notable milestone?

One of my first thoughts was, “Okay, well, it’s 2019 and we hit 200,000. Based on the pattern, when are we going to hit 300,000, or 500,000? What happens when we get much more efficient and accurate software that can audit code?” I mean, we have some out there right now; there are several providers for it. But what happens when that software is applied at scale, say to 90 million GitHub repos?

Currently, GitHub has a security tool that will point out vulnerable dependencies, which is largely based on CVE. So you’re getting some good information, but you’re missing a lot, because CVE doesn’t actually focus on third-party libraries. It’s basically whoever goes to them, makes the request, and says, “I need an ID.”

Whereas we go out of our way to look for those third-party library vulnerabilities, because a lot of our customers want that. So in our minds, once that software is applied and you’re scanning even 50 million GitHub repos, what happens when the vulnerability count goes from 20,000 in one year to 50,000 in one year? Or 100,000 in one year? That kind of scale is pretty scary. The current state of the security industry, or the larger tech industry, as I see it, is that a lot of these organizations, even the big ones, don’t necessarily have a good system in place to effectively monitor all of their assets and map them to those vulnerabilities.

A lot of these organizations, some huge ones, are still relying on network scanning to find all of their vulnerabilities. When you’re talking about a million endpoints, or two million endpoints, even with dozens or hundreds of scanners you’re talking one- or two-week cycles to look for those vulnerabilities. Whereas if they went to an asset inventory system and tied that into a vulnerability intelligence feed, then as soon as a vulnerability is published, they instantly know: “Well, we have 75 machines we have to patch. But wait, this may or may not affect us. Let’s investigate. Oh yes, it does? Okay, get to patching. No, it doesn’t? Great. We’ll recast the score, mark it as a zero for our environmental impact, and we’re done.”

I think that moving forward, a lot of these organizations are going to find themselves having to fully reevaluate their entire process and their entire model for vulnerability detection and remediation.
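
Here is a minimal sketch of the inventory-driven model Brian outlines above: when a vulnerability is published, map it against an asset inventory immediately rather than waiting on a scan cycle, and recast the environmental score to zero when nothing is affected. The record shapes and names are illustrative only.

```python
# Map a newly published vulnerability straight to affected assets
# using an inventory, instead of waiting for a network scan cycle.

inventory = {
    "10.0.1.15": {"software": {"apache httpd": "2.4.29"}},
    "10.0.1.16": {"software": {"openssh": "7.9"}},
}

def assets_running(product: str, inventory: dict) -> list:
    """Return assets whose installed-software list includes the product."""
    return [ip for ip, asset in inventory.items() if product in asset["software"]]

def triage(vuln: dict, inventory: dict) -> None:
    exposed = assets_running(vuln["product"], inventory)
    if exposed:
        print(f"{vuln['id']}: patch {len(exposed)} machine(s): {exposed}")
    else:
        # Not present in our environment: recast environmental impact to zero.
        print(f"{vuln['id']}: no affected assets, environmental score recast to 0")

triage({"id": "VULN-2019-0002", "product": "apache httpd"}, inventory)
```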

Dave Bittner:

The point that you make there is an important one as well, and it goes to your name, Risk Based Security: it’s not just a matter of being aware of necessary patches, for example, and going out and applying them. It’s evaluating them to see where the priority needs to be. How quickly do I need to get to this, based on the risk it represents to my business?

Brian Martin:

Exactly. Take Microsoft Patch Tuesday, which everyone knows and loathes. The thing is, it’s no longer just Microsoft’s Patch Tuesday. Adobe has, for a long time, been releasing on the same day. We frequently see SAP, Google Chrome, Firefox, and Cisco doing these huge releases on that same day.

So an organization gets vulnerability reports saying, “Okay, well: Windows, Windows Server, Cisco, SAP, Chrome, Adobe. My god, where do we start with this?” That’s where, as you say, it becomes a question of which ones pose the biggest risks to us. That starts with something as simple as a CVSS score, which is great. But you also have to look beyond that to see if there are any caveats or little “gotchas” with the vulnerability. Maybe it only affects a certain type of architecture. Maybe this one requires authentication, and you have a great policy for credentials, so maybe that one is considered less of a risk.

As the vulnerability counts go up and as these companies release more at one time, exactly: organizations need to go over it with a finer-toothed comb and say, “Okay, today we can only remediate maybe 50 percent of what was released. Let’s quickly figure out which ones are the most important to us.”
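
Brian’s triage advice can be sketched as a simple prioritization pass: start from the CVSS base score, then downgrade entries whose caveats (wrong architecture, authentication required) reduce the risk in your particular environment. The fields and weights below are made up for illustration; a real recast would follow the CVSS environmental metrics.

```python
# Toy prioritization of a patch-day batch by "effective" risk.

OUR_ARCH = "x86_64"

def effective_risk(vuln: dict) -> float:
    """CVSS base score, adjusted for caveats that apply to our environment."""
    if vuln.get("arch") and vuln["arch"] != OUR_ARCH:
        return 0.0                 # does not apply to our architecture at all
    score = vuln["cvss_base"]
    if vuln.get("requires_auth"):
        score *= 0.5               # assume a strong credential policy halves risk
    return score

batch = [
    {"id": "A", "cvss_base": 9.8},
    {"id": "B", "cvss_base": 8.1, "requires_auth": True},
    {"id": "C", "cvss_base": 7.5, "arch": "arm"},
]

# Remediate the highest effective risk first: A (9.8), B (4.05), C (0.0).
for v in sorted(batch, key=effective_risk, reverse=True):
    print(v["id"], effective_risk(v))
```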

Dave Bittner:

So what’s your advice for folks getting started with this? They’re trying to get a handle on how to better manage dealing with vulnerabilities, getting a handle on how they can evaluate their risk and act on it. What’s a good place to get started?

Brian Martin:

I think, for me, the number one thing goes back to the asset inventory. The only way you can really understand your organization and what’s in front of you is to have a complete inventory of all of your systems, all of your … basically anything with an IP address, anything that’s an endpoint. You need to know not only how many machines or virtual hosts are on the network, but also what software is on them. When possible, you need to know what third-party dependencies that software relies on, because you may get a notification about a vulnerability, let’s say in some third-party library, and you’re like, “I’ve kind of heard of that, I think, maybe?” And in reality, that might actually be a library used in several of your internal applications.

So the better inventory you have, the better you can respond to those vulnerabilities when they come in, because then they are quickly mapped to all of the assets in your organization. In my mind, that’s probably the best place to start, because we still hear about horror stories where a new admin joins a team and they run a vulnerability scan and they say, “Hey wait a minute, what’s this IP address?” And the rest of the team’s like, “I don’t know. We didn’t know that was on the network.” And they have a box just floating out there on their network that hasn’t been touched in a year or five years.

You hear these stories from time to time, and you’re like, “Hey, wow, that’s kind of funny. A five-year-old unpatched box.” But it’s not so funny to an organization that’s trying to maintain a secure posture.

Dave Bittner:

Our thanks to Brian Martin from Risk Based Security for joining us. Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Zane Pokorny, Executive Producer Greg Barrette. The show is produced by the CyberWire, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner. Thanks for listening.
