Podcast

How to Empower Teams With Threat Intelligence

Posted: 18th June 2018
By: AMANDA MCKEON

In this episode of the Recorded Future podcast, we examine how threat intelligence applies to a variety of roles within an organization, and how security professionals can integrate it to empower their team to operate with greater speed and efficiency. How does threat intelligence apply to SOCs, to incident response, or vulnerability management? And how do corporate leaders make the case that threat intelligence is a worthwhile investment?

Joining us to address these questions is Chris Pace, technology advocate at Recorded Future.

This podcast was produced in partnership with the CyberWire and Pratt Street Media, LLC.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and thanks for joining us for episode 61 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.

In this episode of the Recorded Future podcast, we examine how threat intelligence applies to a variety of roles within an organization and how security professionals can integrate it to empower their team to operate with greater speed and efficiency. How does threat intelligence apply to SOCs, incident response, or vulnerability management? How do corporate leaders make the case that threat intelligence is a worthwhile investment? Joining us to address these questions is Chris Pace, technology advocate at Recorded Future. Stay with us.

Chris Pace:

Those of us in security and information security … I think there are some misunderstandings about, first of all, what threat intelligence actually is, and also about the ability to use it as part of security operations, or in a wider information security strategy. A couple of those would be, first of all, that it’s prohibitively expensive, and second, that it doesn’t give you the kind of fidelity you need — the sort of detail you need to really be able to make decisions — and that, because of that, it’s time consuming to use.

And then, perhaps the biggest misconception is that it’s going to be somehow invasive or time consuming to implement, or that it’s going to suck up a load of your resources and your investment. And really, the idea of this report is to outline that none of those things is necessarily true, and to give some really clear indications of where organizations can start to use threat intelligence right now, immediately, and start to see gains in particular security roles.

Dave Bittner:

Let’s back up a little bit and clarify that definition. How do you define threat intelligence?

Chris Pace:

I think we’re in a place within the industry where there are already confusing terminologies around what threat intelligence is and isn’t. That’s why we have tried to be as explicit as possible in the report and outline that threat intelligence isn’t just a data feed. It isn’t just a stream of open source data. Although feeds are easy to digest and have the potential to bring an extra layer to security operations, there are inherent challenges in using feeds that are not intelligence.

Really, the most important point is that they don’t have any kind of context. A feed is just a load of non-prioritized binary information. So, while there’s some value in there — some indications of what’s bad and what’s good — actually adding context to make it intelligence for the security professional can end up consuming a lot of their time and energy, and then we’re back to the situation we’re already in: alerts that don’t get actioned and triaged, and then get missed, leaving the organization open to the risk of a breach.

Dave Bittner:

So, let’s go through the report together. One of the segments highlights threat intelligence for security operations centers. What is the information you want to emphasize there?

Chris Pace:

We already know that those who work in the SOC — those analysts — are some of the busiest security professionals there are, and that’s largely to do with alerts. So, although the original idea of the SIEM was to streamline the process for security operations, identify stuff that might be anomalous or risky, and then be able to action it … Actually, we’ve ended up in a place where alerts from those systems are piling up. They’re demanding analysts’ attention, and in lots of cases, it’s not really possible to process alerts as quickly as they appear.

And then, when they actually get into the process of investigating these alerts, analysts face the challenge of internal data that they can’t necessarily interpret in isolation, so they might be manually searching across a whole variety of threat data sources looking for insights that are relevant to those particular alerts. There’s also a risk that an alert that appears important at the beginning turns into nothing, and they’ve wasted a whole ton of time trying to find out what that thing was.

So really, anything that helps these guys work faster, more efficiently, and most importantly, with more confidence, is a massive advantage. I think, certainly, we would say this is one of the areas in which threat intelligence has an opportunity to make a huge transformational impact within security operations, and maybe that’s not happening right now because there’s a misunderstanding about what this kind of intelligence really is. So really, it’s all about using intelligence — using real, contextualized intelligence to be able to combat that alert fatigue and enable them to enrich that information that they’re seeing internally with useful external intelligence that can help them make a truly risk-based decision. That’s the ultimate end goal.
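For readers who want a concrete picture of what that enrichment step could look like, here is a minimal sketch in Python. The intelligence data, field names, and risk threshold are illustrative assumptions made for the example, not any particular product’s API.

```python
# Illustrative sketch: enriching a SIEM alert with external threat context
# so an analyst can make a faster, risk-based triage decision.
# The intel data, field names, and threshold below are hypothetical.

ALERT = {
    "id": "alert-4711",
    "src_ip": "203.0.113.45",
    "rule": "Outbound connection to rarely seen host",
}

# Stand-in for a threat intelligence lookup, keyed by indicator.
INTEL = {
    "203.0.113.45": {
        "risk_score": 89,
        "evidence": [
            "Recently reported as command-and-control infrastructure",
            "Observed in an active phishing campaign",
        ],
    }
}

def enrich(alert: dict, intel: dict) -> dict:
    """Attach external context to an alert and derive a simple triage priority."""
    context = intel.get(alert["src_ip"], {"risk_score": 0, "evidence": []})
    priority = "investigate now" if context["risk_score"] >= 65 else "routine triage"
    return {**alert, "context": context, "priority": priority}

if __name__ == "__main__":
    enriched = enrich(ALERT, INTEL)
    print(enriched["priority"])                 # investigate now
    for reason in enriched["context"]["evidence"]:
        print(" -", reason)
```

The point of the sketch is simply that the analyst sees the supporting evidence alongside the alert, rather than having to go hunting for it across separate data sources.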

Dave Bittner:

When people are resistant to this, what are their fears? What do you have to overcome, and are those fears justified?

Chris Pace:

Certainly, the biggest fear will be, “We’re just adding another stream of data, and we already have guys who are struggling to deal with the alerts they’re seeing. They’re struggling to investigate these alerts in an effective way, and you’re really telling me that adding another stream of data is going to help solve that problem?” Our answer to that would be, “No.” If they view threat intelligence as just another stream of data, then yes, they have a legitimate concern. But if they can begin to see threat intelligence as contextualized — meaning relevant to their organization, and consumable either by the systems they use or by the analysts who need to use it — I think their view will probably change.

The other important thing is that the consumable nature of intelligence means that they can work with it quickly, so they should be able to see, at a glance, “Is the information I’m seeing here relevant? Can I make a decision based on risk from what I’m seeing in front of me?” If that process is sped up, then I think it really is a strong argument against a feeling that adding threat intelligence can only be a hindrance.

Dave Bittner:

Let’s move on and talk about how you apply threat intelligence to incident response.

Chris Pace:

One of the things about incident response is that time is a big factor here. Incident response is a critical process when it comes to responding to a breach, identifying where it came from, and then doing what needs to be done to remediate the particular threat — that’s the whole process — and time is absolutely critical throughout. In fact, time in security generally is everything, and the risk is that the process becomes time consuming when there are a lot of manual processes involved in uncovering the artifacts — I guess that’s the word that would be used in the industry — that need to be investigated in order to ascertain the nature of a breach.

And the longer that takes, the wider the window of risk becomes. That’s really an area, again, where analysts are overwhelmed, where they’re struggling to deal with the demand and the effort required to research and respond to each incident. There is that increasing time lag between initial detection of the incident and the response. Again, giving analysts the insight they need can speed up the response time, and it also means that if you’ve got analysts who are perhaps at a lower level or earlier in their career, it helps them begin to learn how to use intelligence to effectively research and respond to breaches. That’s a big deal, potentially, when we’re facing the kind of skills gap that we are for the people in these roles. Augmenting the personnel you have with consumable intelligence can actually help to amp up their skills as well, so they become more proficient. Access to that intelligence gives them, first of all, more experience of the nature in which these attacks are happening, but it also enables them to upskill.

Dave Bittner:

And it really highlights that difference between raw information and intelligence.

Chris Pace:

Yeah. We know that, obviously, transforming raw data into information or intelligence, as you mentioned, is the big challenge when it comes to threat intelligence anyway. What we have identified, certainly, is that machines are very good at the heavy lifting when it comes to collecting data from varied sources, and also when it comes to actually transforming that data into something that’s useful for a human being, or can be ingested by a machine.

So actually, the job is not to remove the human element. Analysis of some kind will always be needed. There will always be a requirement for the human being to do what a human being does best, which is to make a reasoned, rational decision based on the information they have available.

Today, machines can’t do that. Now, depending on who you talk to, there’s a day coming when machines can, but for now, machines cannot do that. So, what we task the machines with doing is that heavy lifting — the collection, the initial analysis, the structuring of unstructured data that can turn it into intelligence. The big advantage there is that it happens in real time, so it means that when an incident responder, for example, is looking for relevant intelligence to help them make a decision, if there are changes that are happening around that intelligence, they will see that as and when they happen. They have two advantages: They have the advantage of the machine giving them the lion’s share of the information they need to make a decision, and they also have the speed at which the machine is able to work.
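As a simple illustration of that heavy lifting, here is a hedged sketch that turns a snippet of unstructured reporting into structured, timestamped indicator records. The regular expressions and sample text are assumptions made for the example; they do not describe any specific product’s collection pipeline.

```python
# Illustrative sketch: structuring unstructured threat reporting into
# machine-readable indicator records. Patterns and sample text are hypothetical.
import re
from datetime import datetime, timezone

RAW_REPORT = """
New campaign observed contacting 198.51.100.7 and dropping a payload with
SHA-256 e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.
"""

PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
}

def extract_indicators(text: str) -> list[dict]:
    """Pull indicators out of free text and tag them with a collection timestamp."""
    seen_at = datetime.now(timezone.utc).isoformat()
    records = []
    for ioc_type, pattern in PATTERNS.items():
        for value in pattern.findall(text):
            records.append({"type": ioc_type, "value": value, "first_seen": seen_at})
    return records

for record in extract_indicators(RAW_REPORT):
    print(record)
```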

Dave Bittner:

When an incident does occur, it seems to me like you’re putting your people in a position of being informed, of not feeling around in the dark for what might be happening, but having context.

Chris Pace:

Yeah, and that has two prongs. One prong is, they can respond more effectively to an incident, but the other prong, actually, is that they have initial visibility anyway because they perhaps have a better view of trending threats, or emerging threats, that they wouldn’t have had without this kind of information. They can then begin to narrow in on those trends and say, “Okay, show me the trends that are relevant to my industry or are particularly relevant to technologies that I use, or are even relevant to peers in my industry.”

It’s definitely those two prongs of what I guess we could call prevention: visibility of what’s happening in the threat landscape, which will ultimately inform a much more efficient response when the time comes, if one is necessary.

Dave Bittner:

And I guess it helps stave off that feeling of panic — “We’ve got this under control, we know what’s going on.”

Chris Pace:

Yeah, there’s that classic thing about wandering down the corridor toward the water cooler and the CSO or the CEO asking you about that particular threat from that morning. Although that may seem like a reasonably trivial and small thing, actually having a view of the threat landscape, having an idea of the trends, understanding attackers — not just their methods, but also their motivations — that visibility all helps, as you say, to relieve that feeling of panic when an incident needs to be responded to, so there’s value there too, for sure.

Dave Bittner:

Let’s move on and talk about applying threat intelligence to vulnerability management. What’s your advice when it comes to this?

Chris Pace:

Vulnerability management is obviously … Traditionally, the approach has been, basically, “You have to patch.” You have to find out what you need to patch, and then you have to patch it. But actually, sometimes the key concern is, “How much of a security risk does this really represent?” And that’s balanced with, “If I have to remediate something, what potential does that have to affect my infrastructure? Is there a risk of me breaking something by taking that remediation action?” Traditionally, the way the approach works is, you scan your network to find systems running the vulnerable software, and then you decide whether you think it’s worth patching those systems, based on the limited amount of information you have available.

What threat intelligence applied to vulnerability management really intends to do is, instead of focusing on remediating the highest number of identified vulnerabilities, prioritize those vulnerabilities based on the greatest genuine security threat. Now, often, the scores provided by the official sources don’t take into account whether those vulnerabilities pose the greatest genuine security threat. By genuine security threat I mean, is that particular vulnerability a zero day, or has it been recently added to a well-known exploit kit that’s rampant in the wild?

Actually, the official sources for disclosing vulnerabilities — like the National Vulnerability Database — don’t take into account the nature of a vulnerability. Also, we know that there’s a lag between the reporting from those official sources and references to those vulnerabilities appearing across all the other kinds of threat data sources that exist. In fact, recent research that we did showed that 75 percent of all disclosed vulnerabilities appeared online somewhere else before the National Vulnerability Database — about seven times faster than they were listed there.

Again, if you go back to what I was saying about time, that becomes quite a key concern. So, we’re saying that the method that we use for identifying whether vulnerabilities really pose a risk may not be quite as comprehensive as we would like it to be. That’s really where threat intelligence has a role to play, because if I’m able to see, for example, “Oh, this particular vulnerability has been mentioned in connection with an exploit on a dark web forum,” that changes my attitude to the nature of that vulnerability, versus what I might be seeing in the vulnerability database.
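To make the prioritization idea concrete, here is a minimal sketch that blends a base severity score with evidence of real-world exploitation. The weights, flags, and CVE identifiers are assumptions invented for the example, not an official scoring method.

```python
# Illustrative sketch: ranking vulnerabilities by real-world risk signals
# rather than by base severity alone. Weights and data are hypothetical.

VULNS = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "in_exploit_kit": False, "dark_web_chatter": False},
    {"cve": "CVE-0000-0002", "cvss": 7.5, "in_exploit_kit": True,  "dark_web_chatter": True},
    {"cve": "CVE-0000-0003", "cvss": 6.1, "in_exploit_kit": False, "dark_web_chatter": True},
]

def risk_score(v: dict) -> float:
    """Blend base severity with evidence that the flaw is actually being weaponized."""
    score = v["cvss"]
    if v["in_exploit_kit"]:
        score += 3.0   # active weaponization outweighs raw severity
    if v["dark_web_chatter"]:
        score += 1.5   # early signal that exploitation may be coming
    return score

for v in sorted(VULNS, key=risk_score, reverse=True):
    print(f"{v['cve']}: priority {risk_score(v):.1f} (CVSS {v['cvss']})")
```

Under these made-up weights, the vulnerability already circulating in an exploit kit jumps ahead of the one with the higher CVSS score, which is exactly the kind of reordering the intelligence is meant to drive.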

Dave Bittner:

You can actually get a sense of, are the bad guys gearing up to put this into use, or have they already put it into use, versus, this is just something that researchers in a lab have discovered and we don’t know if it’s being put out there yet.

Chris Pace:

That’s the difference between the technical version of the word “exploitability” and the real-world definition of exploitability. In the lab that you describe, exploitability is how easy it is to use this vulnerability to do something on a system, whereas our real-world understanding of exploitability is, “Has somebody built code, or is somebody sharing proof-of-concept code, that shows this vulnerability being exploited?” Those two things are extremely different, but the first one I describe is what’s used to measure the risk from a particular vulnerability, whereas what we would much rather do is measure risk based upon how likely it is that this thing is being exploited in the wild.

Dave Bittner:

Given that most organizations have limited resources, it’s fair to say this lets you prioritize how you’re going to go about your patching and your mitigation of these things, based on how that risk would apply to you specifically.

Chris Pace:

Absolutely. I think, also, it does potentially move us away from this idea that patching is the only solution to this vulnerability problem. You may reach a point where, if you felt the risk factors — the indicators of risk — were high enough, you would take a more preventative remediation action than that. We see, for example, where some companies and government organizations have made decisions to take down websites based on unpatched vulnerabilities. They could see, from threat intelligence, that they had a vulnerability that was likely to be exploited, so instead of waiting for the patch to arrive, they proactively remediated.

Now, some time back, the argument might have been, “Why would you do that? You would never stop your business operations.” But I think we are reaching a point where boards of businesses understand the potential impact that a breach — particularly of customer data — has, so now they can use these risk factors to make reasoned decisions about how to respond. Sometimes that might be a patch, but other times it might be another form of remediation that’s seen as being balanced in view of the risk.

Dave Bittner:

Now, speaking of boards, let’s talk about CISOs and leaders in an organization. How do they go about making the case to the “powers that be” that this is something that they should invest in?

Chris Pace:

Obviously, these are guys in organizations that have a tremendous amount of responsibility. Their job is often to act as intermediaries between security functions and the board, and actually, that whole role ultimately boils down to management of risk. How can they verify what risk really is, and then do a better job of managing that risk? Of course, cyber risk in particular — and maybe uniquely when compared with other kinds of risks that those CISOs might have to deal with — is not static. It changes all the time. It depends on the latest vulnerabilities, and it depends on how threat actors may be changing either their targets or their methods. It could depend on the political landscape. It could depend on how the industry that that company is in is viewed at that particular time.

That risk is a moving, morphing thing, and that means that getting timely and useful information about risk — whether it’s posed by an emerging or an ongoing cyber threat — is a massive challenge. That’s where we really believe that threat intelligence — the ability to give a real-time picture of the latest threats, trending threats, particular events, and the context that’s necessary to calculate the risk a business potentially faces — can really make a big impact, and can actually enable you to communicate that potential impact to other business leaders, or the board. I think we all recognize that one challenge that has come to CISOs in that particular role is communicating some of this more technical information and boiling it down to the risk we really face. It’s a really, really tough job.

I think they’re also trying to do that whilst balancing the budget they have against the most effective kinds of investments. While CISOs may have more money than before, there’s a possibility that it still lags behind what’s really needed, and you can’t buy everything. You can’t buy every single security solution out there, and that means you need to choose what you’re buying really wisely. Again, it seems to us that the only logical way to make investment decisions is to look at risk. What is your organization’s risk profile, based not just on internal information about your assets or the nature of your brand, but also on the industry you’re in? When you’re also looking at threat trends that are directly relevant to your kind of organization, justifying those areas of investment becomes a much, much simpler process.

Dave Bittner:

Yeah. It strikes me that even if the higher-ups in a company at the board level may not understand all of the technical details, risk is something that they do understand, and being able to contextualize that risk must be a powerful tool.

Chris Pace:

Yeah, of course, because at the moment, if you aren’t looking at it from the point of view of what’s happening outside your organization — that is, using external intelligence to identify where risks really are — then it just becomes a bit of an inside job. Using internal data has some value, but it will never give you all of the context you need to build a comprehensive view. One of the analogies used in the report is that if you’re not using threat intelligence to inform risk, you could be spending a fortune on bulletproof glass while threat actors are building a death ray. There’s just no correlation between the attacks you’re likely to face and the defenses you’ve decided to put in place.

It would be my argument that you can’t solely rely on internal information about your company to give you that insight. You have to have some understanding of how you’re potentially being targeted, or how you’re potentially being attacked, or the breaches that your peers have experienced, for example, to be using all of that information that’s available to make those kinds of decisions.

Dave Bittner:

For the organization that is considering going with some outside threat intelligence, that’s getting started and maybe isn’t sure how to do it, and maybe is a little concerned about taking on this new thing, what’s your advice for them?

Chris Pace:

The best place that you can begin is by defining where you think you will see, first of all, the biggest return on your investment, but also the biggest advantage to your security strategy. I can give you a couple of examples of that. Let’s say you’re in a place where you’re struggling with managing vulnerabilities. You worry that you’re opening yourself to risk. You can get an immediate advantage from being able to contextualize the information that you have around those vulnerabilities. If that’s an area where you think you can see an immediate advantage, then you should focus on that first, get that done effectively, and then look at the other places where you could put intelligence.

One other area would be security operations. If you’ve already invested in a SIEM and it’s already heavily used inside your organization, there’s already a whole process in place for triaging and investigating alerts. If you’re already doing that, then adding contextualized, external threat intelligence to that process will undoubtedly bring an advantage, and it will undoubtedly streamline how that effort is working. Rather than stepping back and saying, “Oh, we need to implement a threat intelligence capability, so let’s try and do a little bit of all of these things,” focus on doing one thing that brings a significant advantage first. Once you’ve seen the value in that, it will naturally knock on to other areas of the business.

I think the other really important thing is to think about it holistically. If you’re a security professional, you might be thinking, “Threat intelligence must exist as a separate silo of its own, made up of threat intelligence analysts and all of that kind of thing.” Don’t misunderstand me — that’s great if you can make that investment and build that team — that’s kind of the holy grail. But if you’re looking to make improvements to your security, threat intelligence can begin to be embedded in all of the roles we’ve discussed, so make it a holistic thing. Involve all of the people who need to be involved across a security program to really see the advantages that intelligence can bring.

Dave Bittner:

Our thanks to Chris Pace for joining us.

The report is titled “Busting Threat Intelligence Myths: A Guide for Security Professionals.” You can find it on the Recorded Future website.

Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner. Thanks for listening.
