How Threat Intelligence Combats Imperfect Information in Incident Response

November 6, 2018 • Zane Pokorny

Editor’s Note: Over the next several months, we’ll be sharing excerpts from our new book, “The Threat Intelligence Handbook.” Here, we’re looking at the third chapter, “Threat Intelligence for Incident Response.” To read the full chapter, download your free copy of the handbook.

Let’s say a team of firefighters responding to a call arrives on the scene to find two houses on fire. The houses are about the same size and seem to be burning at the same rate, but the firefighters only have the resources to try to stop one. How do they decide which fire to put out?

They’re dealing with a problem of imperfect information, and there’s little time to do any research. Without critical context — facts like whether there are people inside one house or the other, or how sturdy and fire-resistant the construction is — the firefighters will have to make their best guess. Choosing wrongly can have tragic consequences.

In cybersecurity, incident response teams face conceptually similar challenges. With too little time and not enough information, teams can struggle to determine which alerts represent critical incidents and which are merely redundant or false positives. Ironically, in the internet age the problem of imperfect information frequently stems not from having too little data, but from having too much: security practitioners often find themselves sorting through floods of contextless threat data feeds and streams of alerts across multiple, disconnected applications.

Threat intelligence, when employed wisely, provides that much-needed context, and it can do so fast enough to make a huge difference for incident response teams, where minutes matter. We’ll look more closely at how in the following excerpt from “The Threat Intelligence Handbook,” which has been edited and condensed for clarity.

Threat Intelligence for Incident Response

Of all security groups, incident response teams are perhaps the most highly stressed. Among the reasons are:

  • Cyber incident volumes have increased steadily for two decades.
  • Threats have become more complex and harder to analyze, and staying on top of the shifting threat landscape has become a major task in itself.
  • When responding to security incidents, analysts are forced to spend a lot of time manually checking and disseminating data from disparate sources.
  • Containment of attacks and eradication of vulnerabilities continually grows more difficult.

As a result, incident response teams routinely operate under enormous time pressures and often are unable to contain cyber incidents promptly.

Continuing Challenges

While it’s difficult to be precise about the number of incidents experienced by a typical organization, there is no doubt that cyberattack volume is growing rapidly. According to SonicWall, the global volume of malware attacks increased by more than 18 percent during 2017 alone. Other popular attack vectors, such as phishing and attacks concealed in encrypted traffic, are also seeing substantial increases in volume every year. While some of this growing pressure is mitigated by preventative technologies, a huge additional strain is nonetheless being placed on incident response teams because of the following four factors:

1. A skills gap.

Incident response is not an entry-level security function. It encompasses a vast swath of skills, including static and dynamic malware analysis, reverse engineering, digital forensics, and more. It requires analysts who have experience in the industry and can be relied upon to perform complex operations under pressure.

The highly publicized cybersecurity skills gap has grown consistently wider over the past decade. According to a 2017 research report by ISSA, almost three-quarters of security professionals claim their organization is affected by the global skills shortage. In its most recent Global Information Security Workforce Study, Frost & Sullivan predicts the skills gap will grow to 1.8 million workers by 2022.

2. Too many alerts, too little time.

In tandem with the lack of available personnel, incident response teams are bombarded by an unmanageable number of alerts. According to the Ponemon “Cost of Malware Containment” report, security teams can expect to log almost 17,000 malware alerts in a typical week. That’s more than 100 alerts per hour for a team that operates 24/7. And those are only the alerts from malware incidents.

To put these figures in perspective, all these alerts can lead security teams to spend over 21,000 man-hours each year chasing down false positives. That’s 2,625 standard eight-hour shifts spent just distinguishing genuine alerts from false ones.
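The arithmetic behind those figures is straightforward. A minimal back-of-the-envelope check, using the weekly alert count and annual man-hour figure from the Ponemon report cited above:

```python
# Back-of-the-envelope math for the Ponemon figures cited above.
ALERTS_PER_WEEK = 17_000      # malware alerts logged in a typical week
HOURS_PER_WEEK = 7 * 24       # a team operating 24/7

alerts_per_hour = ALERTS_PER_WEEK / HOURS_PER_WEEK
print(f"{alerts_per_hour:.0f} alerts per hour")  # ≈ 101, i.e. more than 100

MAN_HOURS_PER_YEAR = 21_000   # annual hours spent chasing false positives
SHIFT_LENGTH = 8              # a standard eight-hour shift

shifts = MAN_HOURS_PER_YEAR / SHIFT_LENGTH
print(f"{shifts:.0f} eight-hour shifts")         # 2625
```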

3. Time to response is rising.

When you have too few skilled personnel and too many alerts, there’s only one outcome: the time to resolve genuine security incidents will rise. According to analysis of source data from a recent Verizon Data Breach Investigations Report, while median time to incident detection is a fairly reasonable four hours, median time to resolution (MTTR) is more than four days.

Of course, cybercriminals have no such time constraints. Once they gain a foothold inside a target network, time to compromise is usually measured in minutes.

4. A piecemeal approach.

Most organizations’ security groups have grown organically in parallel with increases in cyber risk. As a result, they have added security technologies and processes piecemeal, without a strategic design.

While this ad hoc approach is perfectly normal, it forces incident response teams to spend a lot of time aggregating data and context from a variety of security technologies (e.g., SIEM, EDR, and firewall logs) and threat feeds. This effort significantly extends response times and increases the likelihood that mistakes will be made.

The Reactivity Problem

Once an alert is flagged, it must be triaged, remediated, and followed up as quickly as possible to minimize cyber risk.

Consider a typical incident response process:

  1. Incident Detection: Receive an alert from a SIEM, EDR, or similar product
  2. Discovery: Determine what’s happened and how to respond
  3. Triage and Containment: Take immediate actions to mitigate the threat and minimize damage
  4. Remediation: Repair damage and remove infections
  5. Push to BAU: Pass the incident to “business as usual” teams for final actions

Notice how reactive this process is. For most organizations, nearly all the work necessary to remediate an incident is back-loaded, meaning it can’t be completed until after an alert is flagged. Although this is inevitable to some degree, it is far from ideal when incident response teams are already struggling to resolve incidents quickly enough.
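The five stages above can be sketched as a minimal, strictly ordered pipeline. The `Incident` class and stage names below are hypothetical illustrations of the workflow, not taken from any particular product:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    DETECTION = auto()     # alert arrives from a SIEM, EDR, or similar product
    DISCOVERY = auto()     # determine what happened and how to respond
    TRIAGE = auto()        # contain the threat and minimize damage
    REMEDIATION = auto()   # repair damage and remove infections
    BAU = auto()           # hand off to "business as usual" teams

@dataclass
class Incident:
    alert_id: str
    stage: Stage = Stage.DETECTION
    history: list = field(default_factory=list)

    def advance(self):
        """Move to the next stage. The workflow is strictly reactive:
        nothing can run before the alert is flagged in DETECTION."""
        order = list(Stage)
        self.history.append(self.stage)
        idx = order.index(self.stage)
        if idx < len(order) - 1:
            self.stage = order[idx + 1]

incident = Incident("SIEM-4711")
while incident.stage is not Stage.BAU:
    incident.advance()
print(incident.history)  # the four earlier stages, in order
```

Because every stage is gated on the one before it, all of the real work is back-loaded behind the initial alert, which is exactly the reactivity problem described above.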

Minimizing Reactivity in Incident Response

To reduce response times, incident response teams must become less reactive. Two areas where advanced preparation can be especially helpful are identification of probable threats and prioritization.

Identification of Probable Threats

If an incident response team can identify the most commonly faced threats in advance, they can develop strong, consistent processes to cope with them. This preparation dramatically reduces the time the team needs to contain individual incidents, prevents mistakes, and frees up analysts to cope with new and unexpected threats when they arise.

Prioritization

Not all threats are equal. If incident response teams can understand which threat vectors pose the greatest level of risk to their organization, they can allocate their time and resources accordingly.

Strengthening Incident Response With Threat Intelligence

It should be clear from our discussion so far that security technologies by themselves can’t do enough to reduce pressure on human analysts. Threat intelligence can minimize the pressure on incident response teams and address many of the issues we have been reviewing by:

  • Automatically identifying and dismissing false positive alerts
  • Enriching alerts with real-time context from across the open and dark web
  • Assembling and comparing information from internal and external data sources to identify genuine threats
  • Scoring threats according to the organization’s specific needs and infrastructure

In other words, threat intelligence provides incident response teams with exactly the actionable insights they need to make faster, better decisions, while holding back the tide of irrelevant and unreliable alerts that typically make their job so difficult.
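In practice, those bullet points amount to a filter-and-enrich step in front of the analyst queue. The sketch below is illustrative only; the field names, the score threshold, and the `intel_lookup` stand-in are invented assumptions, not any real product’s API:

```python
# Hypothetical sketch of intelligence-driven alert triage.
# All names, data, and thresholds are illustrative assumptions.

KNOWN_FALSE_POSITIVES = {"10.0.0.5"}  # e.g. an internal vulnerability scanner

def intel_lookup(indicator):
    """Stand-in for a threat intelligence enrichment lookup."""
    feed = {
        "203.0.113.7": {"risk_score": 91, "context": "C2 server, active this week"},
    }
    return feed.get(indicator, {"risk_score": 0, "context": "no external corroboration"})

def triage(alert):
    """Dismiss known false positives, enrich the rest, and score them."""
    src = alert["source_ip"]
    if src in KNOWN_FALSE_POSITIVES:
        return None                      # dismissed automatically, no analyst time spent
    enrichment = intel_lookup(src)
    alert["context"] = enrichment["context"]
    alert["priority"] = "high" if enrichment["risk_score"] >= 75 else "low"
    return alert

print(triage({"source_ip": "10.0.0.5"}))     # None -- filtered out
print(triage({"source_ip": "203.0.113.7"}))  # enriched, priority "high"
```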

Essential Characteristics of Threat Intelligence for Incident Response

Let’s now examine the characteristics of a powerful threat intelligence capability and how they address the greatest pain points of incident response teams.

Comprehensive

To be valuable to incident response teams, threat intelligence must be captured automatically from the widest possible range of locations across open sources, technical feeds, and the dark web. Otherwise, analysts will be forced to conduct their own manual research to ensure nothing important has been missed.

Relevant

It’s impossible to avoid all false positives when working to identify and contain incidents. But threat intelligence should help incident response teams quickly identify and purge false positives generated by security technologies such as SIEM and EDR products.

There are two categories of false positives to consider:

  1. Alerts that are relevant to an organization but are inaccurate or unhelpful
  2. Alerts that are accurate and/or interesting but aren’t relevant to the organization

Both types have the potential to waste an enormous amount of incident response analysts’ time. Advanced threat intelligence products are now employing machine learning technology to identify and discard false positives automatically and draw analysts’ attention to the most important and most relevant intelligence.

Contextualized

Not all threats are created equal. Even among relevant threat alerts, some will inevitably be more urgent and important than the rest. An alert from a single source could be both accurate and relevant, but still not particularly high in priority. That is why context is so important: it provides critical clues about which alerts are most likely to be significant to your organization.

Contextual information related to an alert might include:

  • Corroboration from multiple sources that the same type of alert has been associated with recent attacks
  • Confirmation that the alert has been associated with threat actors known to be active in your industry
  • A timeline showing that the alert occurred slightly before or after other events linked with attacks

Modern machine learning and artificial intelligence (AI) technologies make it possible for a threat intelligence solution to consider multiple sources concurrently and determine which alerts are most important to a specific organization.
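A prioritization step that weighs those contextual signals might look like the following sketch. The signal names and weights are invented for illustration, not drawn from any real scoring model:

```python
# Illustrative context-based alert prioritization; weights are invented.
CONTEXT_WEIGHTS = {
    "corroborated_by_multiple_sources": 3,  # same alert type seen in recent attacks
    "actor_active_in_our_industry": 4,      # tied to a threat actor active in the industry
    "correlates_with_attack_timeline": 2,   # occurred near other attack-linked events
}

def context_score(signals):
    """Sum the weights of whichever contextual signals are present."""
    return sum(CONTEXT_WEIGHTS[s] for s in signals if s in CONTEXT_WEIGHTS)

def prioritize(alerts):
    """Order alerts so those with the richest context rise to the top."""
    return sorted(alerts, key=lambda a: context_score(a["signals"]), reverse=True)

queue = prioritize([
    {"id": "A-1", "signals": []},
    {"id": "A-2", "signals": ["actor_active_in_our_industry",
                              "corroborated_by_multiple_sources"]},
    {"id": "A-3", "signals": ["correlates_with_attack_timeline"]},
])
print([a["id"] for a in queue])  # ['A-2', 'A-3', 'A-1']
```

A production system would learn such weights from data rather than hard-code them, but the principle is the same: the more independent context an alert accumulates, the higher it ranks in the analyst queue.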

Integrated

Among the most critical functions of a threat intelligence system is the ability to integrate with a broad range of security tools, including SIEM and incident response solutions, examine the alerts they generate, and:

  • Determine whether each alert should be dismissed as a false positive
  • Score the alert according to its importance
  • Enrich the alert with valuable extra context

This integration eliminates the need for analysts to manually compare each alert to information in diverse security and threat intelligence tools. Even more important, integration and automated processes can filter out a huge number of false positives without any checking by a human analyst. The amount of time and frustration this capability saves makes it perhaps the single greatest benefit of threat intelligence for incident response teams.

Get the Threat Intelligence Handbook

Not included here are three use cases — and one “abuse” case — that explore in more specific terms the value that threat intelligence can bring to any incident response operation.

The use cases unpack how threat intelligence helps incident response teams shift from a reactive to a proactive stance, how to get context to better scope and contain incidents, and how to quickly remediate data loss. The abuse case looks at the risks of committing only halfheartedly to a threat intelligence solution — for example, by incorporating a few threat data feeds and nothing else. Doing so can just add more work for security analysts.

To read the full chapter, “Threat Intelligence for Incident Response,” which includes these use cases alongside other helpful tips and resources, download your free copy of “The Threat Intelligence Handbook” today.