
Key Threat Intelligence Metrics for Your Security Strategy

Posted: 25th January 2018
By: Gavin Reid

Key Takeaways

  • When it comes to threat intelligence, finding both easy and useful metrics can be challenging.
  • Focus on metrics that show or track completed work, versus just finding the proverbial needles in haystacks.
  • Don’t fall into the metrics trap of counting things just because you can. Make sure the metrics you choose represent a story that is valuable to your organization.
  • The metrics you track become much more valuable over time, as trends against your baseline can reveal changes you would otherwise miss.

In my career, I have created threat intelligence teams for the public sector, financial sector, and large commercial organizations. There were times when I struggled to find good measurements that drive better usability, accuracy, and capability. I am currently helping a customer who is starting a threat intelligence team in a very metrics-driven organization. While we have some pretty well-trodden paths for integrating security metrics into incident response, vulnerability management, and security architecture, I realized that we don’t have the same capabilities for threat intelligence.

The 2 Types of Security Metrics

Before we start examining potential security metrics for threat intelligence, it’s helpful to remember that there are two types of commonly created security metrics. The first is a metric that helps move the needle on the organization’s overall security posture. The second is a metric that reflects a team’s efficacy in doing its job. The two are certainly related, but they are not the same. Metrics are only as useful as the value they add to the organization; with little or no value, they are just “counting,” not metrics. Whatever you choose to record, schedule periodic reviews to confirm that the metrics are delivering the value you expected, and to check whether your organization’s business drivers have changed enough to warrant changing how you measure them.

The esoteric nature of threat intelligence metrics makes them hard to track and communicate. While threat intelligence can be seen as helping to find the needle in the haystack, I highly recommend you don’t only record the number of needles found, or you will be heading toward a bad outcome. Needles are rare in haystacks and not the best indicator of overall work output (or efficacy). Instead, make sure you track what you’re doing to find the needles. This sort of metric shows that the team is doing its due diligence to protect the organization, and it’s great for audit trails. It’s also important to understand that tracking these metrics over time, and following how they change, greatly enhances the story they tell.

Establish a Baseline

Phase one of any metrics program should be establishing a baseline of what “normal” looks like. A good threat intelligence team creates materials that the organization can use to make decisions and improve high-level processes, such as which security architecture to prioritize, as well as more tactical processes, like what to patch and when. Any threat intelligence metric needs to track how effectively the team helps the organization make those choices.
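
As a quick illustration of baselining, below is a minimal sketch in Python of one way to track a weekly metric against a rolling baseline and flag sharp deviations. The metric, window size, and threshold are all hypothetical choices, not recommendations; the point is that the trend, not the raw count, tells the story.

```python
from statistics import mean, stdev

def flag_deviations(weekly_counts, window=8, threshold=2.0):
    """Flag weeks where a metric deviates sharply from its rolling baseline.

    weekly_counts: weekly totals for a single metric, e.g., alerts enhanced
    with intelligence per week (hypothetical data).
    """
    flags = []
    for i in range(window, len(weekly_counts)):
        baseline = weekly_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Treat anything beyond `threshold` standard deviations as a trend change
        if sigma > 0 and abs(weekly_counts[i] - mu) > threshold * sigma:
            flags.append((i, weekly_counts[i], round(mu, 1)))
    return flags

# Made-up history: the jump in the final week is what the baseline surfaces
history = [40, 42, 38, 45, 41, 39, 44, 43, 42, 71]
print(flag_deviations(history))  # flags the final week, well above the baseline
```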

So, with all that being said, what do threat intelligence teams track? The list below isn’t exhaustive, but it is a high-level look at some of the metrics in common use. Most of these metrics focus on getting better at security. Tracking critical resources, like time, can also help trend team efficacy.

Input

  • How many alerts were enhanced with intelligence?
  • How many feeds are being ingested?

Analysis

  • Number of new external campaigns/threat groups tracked.
  • Number of vulnerabilities being tracked.
  • Number of detections for tactics, techniques, and procedures (TTPs) added to the SOC workflow (YARA rules, IDS signatures, etc.).
  • Number of indicators of compromise (IOCs) for detections or mitigations added to the SOC workflow (IP addresses, domains, file hashes).
  • Feed efficacy (new data added, old data expired, uniqueness of data).
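
To make the last bullet concrete, here is a minimal sketch of one way to score feed efficacy, assuming each feed can be reduced to a set of indicator strings pulled on a schedule. The feed contents and names below are hypothetical.

```python
def feed_efficacy(previous, current, other_feeds):
    """Summarize one feed's churn and uniqueness for a reporting period.

    previous, current: indicator sets from the feed (last period vs. this period).
    other_feeds: all indicators seen across the other ingested feeds.
    """
    added = current - previous        # new data added this period
    expired = previous - current      # old data that aged out
    unique = current - other_feeds    # indicators no other feed provided
    return {
        "added": len(added),
        "expired": len(expired),
        "unique_pct": round(100 * len(unique) / len(current), 1) if current else 0.0,
    }

# Hypothetical indicator sets for a single feed across two pulls
last_pull = {"198.51.100.7", "evil.example", "203.0.113.5"}
this_pull = {"198.51.100.7", "bad.example", "203.0.113.9"}
all_other_feeds = {"198.51.100.7", "203.0.113.9"}
print(feed_efficacy(last_pull, this_pull, all_other_feeds))
# {'added': 2, 'expired': 2, 'unique_pct': 33.3}
```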

Output

  • Curated intelligence published (daily, weekly, quarterly, flashes).
  • How many threat intelligence projects were successfully completed?
  • Number of stakeholder interactions influenced by threat intelligence (security architecture, physical security, etc.).
  • Due diligence: how often did we check specific feeds, with auditable results? (See the sketch after this list.)
  • How many failures to correctly anticipate events were there?
  • How many analysis judgments were made, but proven later to be incorrect?
  • Number of stakeholders provided with threat intelligence relevant to their needs.
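
For the due diligence item above, the metric is often nothing more exotic than an auditable log of checks. Below is a minimal sketch that appends each feed check to a CSV file and counts checks per feed for the reporting period; the feed names, analysts, and file location are all hypothetical.

```python
import csv
from collections import Counter
from datetime import datetime, timezone

AUDIT_LOG = "feed_checks.csv"  # hypothetical location for the audit trail

def record_check(feed_name, analyst, result):
    """Append one auditable row: when a feed was checked, by whom, and the outcome."""
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), feed_name, analyst, result]
        )

def checks_per_feed():
    """Count how often each feed was checked, which is the number reported each period."""
    with open(AUDIT_LOG, newline="") as f:
        return Counter(row[1] for row in csv.reader(f))

record_check("vendor-feed-a", "analyst1", "no new hits")
record_check("open-source-feed-b", "analyst2", "2 IOCs pushed to SOC")
print(checks_per_feed())  # e.g., Counter({'vendor-feed-a': 1, 'open-source-feed-b': 1})
```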

Impact

  • Percentage of finds directly attributable to threat intelligence, broken down by which source found what.
  • Incident response tickets directly attributable to intelligence.
  • Instances where threat intelligence has led to re-prioritization (e.g., urgent patching or a new architectural direction).
  • Number of campaigns/threat groups identified directly targeting the organization.
  • Timeliness of notification versus patching.
  • Timeliness of intelligence versus event.
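
As a concrete example of the attribution and timeliness items above, here is a minimal sketch that computes the percentage of incident response tickets attributable to intelligence and the lead time between intelligence and event. The ticket structure and dates are assumptions for illustration, not a real incident response schema.

```python
from datetime import datetime

# Hypothetical tickets: when intelligence was published (if at all)
# and when the related event was seen in the environment
tickets = [
    {"id": "IR-101", "intel_published": datetime(2018, 1, 3), "event_seen": datetime(2018, 1, 9)},
    {"id": "IR-102", "intel_published": None, "event_seen": datetime(2018, 1, 11)},
    {"id": "IR-103", "intel_published": datetime(2018, 1, 14), "event_seen": datetime(2018, 1, 15)},
]

intel_driven = [t for t in tickets if t["intel_published"] is not None]

# Incident response tickets directly attributable to intelligence
attributable_pct = 100 * len(intel_driven) / len(tickets)

# Timeliness of intelligence versus event: lead time in days per intel-driven ticket
lead_days = [(t["event_seen"] - t["intel_published"]).days for t in intel_driven]

print(f"{attributable_pct:.0f}% of tickets attributable to intelligence")  # 67%
print(f"average lead time: {sum(lead_days) / len(lead_days):.1f} days")    # 3.5 days
```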

Don’t Get Caught Up in the Numbers

There are many things you can “count,” but just because you can doesn’t mean you should. The philosophy of “management needs metrics, so I need to generate them” should be reframed; only metrics that you can depend on to give you organizationally relevant results should be considered. I highly recommend that you choose metrics you have direct control over. If you choose a metric that depends on others to complete, you’re at their mercy.

So, an example of a metric you would directly control could be something like “percentage of finds directly attributable to threat intelligence.” An example of a metric you may not control could be “vulnerabilities 100 percent patched” or “desktops remediated.” The most important thing to remember is not to get tied down in the numbers, and not to measure something just because you can. The focus must be on what story you’re trying to tell and how the trending data fits into that story. If you don’t have good answers to those questions, what you’re measuring won’t measure up.
