How to Increase Incident Response Efficiency With Security Intelligence

Posted: 10th March 2020

Editor’s Note: Over the next several weeks, we’re sharing excerpts from the second edition of our popular book, “The Threat Intelligence Handbook: Moving Toward a Security Intelligence Program.” Here, we’re looking at chapter four, “Threat Intelligence for Incident Response.” To read the entire chapter, download your free copy of the handbook.

Now more than ever, incident response (IR) teams on the front lines of cyber defense require security intelligence to help cut through the noise and respond quickly to real threats.

In security operations center (SOC) environments where minutes matter, a constant barrage of low-quality alerts from threat data feeds and across multiple, disconnected systems equates to too much information and too little time to investigate it all properly.

Without critical context on which alerts represent true security incidents, teams are left struggling through mountains of redundancies and false positives. This hinders team productivity and pushes hard-working analysts to the brink of burnout. In fact, nearly half of SOC teams face false-positive rates of 50% or higher. And the average organization wastes between 286 and 424 hours per week investigating false positives alone.

Imagine spending half of your time at work chasing dead ends that force you to work harder and longer just to tread water, and it’s easy to see why security analysts are exhausted, stressed, and increasingly dissatisfied with their jobs — often leaving these roles within the first one to three years.

This reactive cybersecurity approach and constant churn have a ripple effect across the entire security organization. An ESG report found that 41% of security staff are forced to spend a disproportionate amount of time on incident response, cutting into valuable time allocated for key planning, strategy, and training efforts.

When intelligence is wisely employed, it provides the necessary context to help security analysts focus their efforts, amplify their impact, and boost the efficacy of the overall security team. Putting security intelligence to work also helps teams shift toward a more holistic, proactive approach, which plays a critical role in empowering a productive, engaged workforce and attracting and retaining top talent.

We explore this topic more closely in the following excerpt from “The Threat Intelligence Handbook: Moving Toward a Security Intelligence Program,” which has been edited and condensed for clarity.

Threat Intelligence for Incident Response

Of all security groups, incident response teams are perhaps the most highly stressed. Among the reasons are:

  • Cyber incident volumes have increased steadily for two decades.
  • Threats have become more complex and harder to analyze, and staying on top of the shifting threat landscape has become a major task in itself.
  • When responding to security incidents, analysts are forced to spend a lot of time manually checking and disseminating data from disparate sources.
  • Containment of attacks and eradication of vulnerabilities continually grows more difficult.

As a result, incident response teams routinely operate under enormous time pressures and often are unable to contain cyber incidents promptly.

Continuing Challenges

While it’s difficult to be precise about the number of incidents experienced by a typical organization, there is no doubt that cyberattack volume is growing rapidly. According to SonicWall, the global volume of malware attacks increased by more than 18 percent during 2017 alone. Other popular attack techniques, such as phishing and attacks hidden in encrypted traffic, are also seeing substantial increases in volume every year. While some of this growing pressure is mitigated by preventative technologies, a huge additional strain is nonetheless being placed on incident response teams because of the following four factors:

1. A skills gap.

Incident response is not an entry-level security function. It encompasses a vast swath of skills, including static and dynamic malware analysis, reverse engineering, digital forensics, and more. It requires analysts who have experience in the industry and can be relied upon to perform complex operations under pressure.

The highly publicized cybersecurity skills gap has grown consistently wider over the past decade. According to a 2017 research report by ISSA, almost three-quarters of security professionals claim their organization is affected by the global skills shortage. In its most recent Global Information Security Workforce Study, Frost & Sullivan predicts the skills gap will grow to 1.8 million workers by 2022.

2. Too many alerts, too little time.

In tandem with the lack of available personnel, incident response teams are bombarded by an unmanageable number of alerts. According to the Ponemon "Cost of Malware Containment" report, security teams can expect to log almost 17,000 malware alerts in a typical week. That’s more than 100 alerts per hour for a team that operates 24/7. And those are only the alerts from malware incidents.

To put these figures in perspective, all these alerts can lead security teams to spend over 21,000 man-hours each year chasing down false positives. That’s 2,625 standard eight-hour shifts needed just to distinguish bad alerts from good ones.

3. Time to response is rising.

When you have too few skilled personnel and too many alerts, there’s only one outcome: the time to resolve genuine security incidents will rise. According to analysis of source data from a recent Verizon Data Breach Investigations Report, while median time to incident detection is a fairly reasonable four hours, median time to resolution (MTTR) is more than four days.

Of course, cybercriminals have no such time constraints. Once they gain a foothold inside a target network, time to compromise is usually measured in minutes.

4. A piecemeal approach.

Most organizations’ security groups have grown organically in parallel with increases in cyber risk. As a result, they have added security technologies and processes piecemeal, without a strategic design.

While this ad hoc approach is perfectly normal, it forces incident response teams to spend a lot of time aggregating data and context from a variety of security technologies (e.g., SIEM, EDR, and firewall logs) and threat feeds. This effort significantly extends response times and increases the likelihood that mistakes will be made.

The Reactivity Problem

Once an alert is flagged, it must be triaged, remediated, and followed up as quickly as possible to minimize cyber risk.

Consider a typical incident response process:

  • Incident Detection: Receive an alert from a SIEM, EDR, or similar product
  • Discovery: Determine what’s happened and how to respond
  • Triage and Containment: Take immediate actions to mitigate the threat and minimize damage
  • Remediation: Repair damage and remove infections
  • Push to BAU: Pass the incident to “business as usual” teams for final actions

Notice how reactive this process is. For most organizations, nearly all the work necessary to remediate an incident is back-loaded, meaning it can’t be completed until after an alert is flagged. Although this is inevitable to some degree, it is far from ideal when incident response teams are already struggling to resolve incidents quickly enough.

Minimizing Reactivity in Incident Response

To reduce response times, incident response teams must become less reactive. Two areas where advanced preparation can be especially helpful are identification of probable threats and prioritization.

Identification of Probable Threats

If an incident response team can identify the most commonly faced threats in advance, they can develop strong, consistent processes to cope with them. This preparation dramatically reduces the time the team needs to contain individual incidents, prevents mistakes, and frees up analysts to cope with new and unexpected threats when they arise.


Prioritization

Not all threats are equal. If incident response teams can understand which threat vectors pose the greatest level of risk to their organization, they can allocate their time and resources accordingly.

Strengthening Incident Response With Threat Intelligence

It should be clear from our discussion so far that security technologies by themselves can't do enough to reduce pressure on human analysts. Threat intelligence can minimize the pressure on incident response teams and address many of the issues we have been reviewing by:

  • Automatically identifying and dismissing false positive alerts

  • Enriching alerts with real-time context from across the open and dark web

  • Assembling and comparing information from internal and external data sources to identify genuine threats

  • Scoring threats according to the organization’s specific needs and infrastructure

In other words, threat intelligence provides incident response teams with exactly the actionable insights they need to make faster, better decisions, while holding back the tide of irrelevant and unreliable alerts that typically make their job so difficult.
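As a rough illustration of this flow (not any vendor's actual API), the dismiss-enrich-score steps could be sketched in Python. The `INTEL` dictionary stands in for a hypothetical intelligence store built from open sources, technical feeds, and the dark web; all names and thresholds here are assumptions for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    indicator: str            # e.g., an IP address or file hash
    source: str               # the tool that raised it (SIEM, EDR, ...)
    context: dict = field(default_factory=dict)
    score: float = 0.0
    dismissed: bool = False

# Hypothetical intelligence store: indicator -> context gathered
# from external sources. Real systems would query live feeds.
INTEL = {
    "198.51.100.7": {"known_malicious": True, "sightings": 42, "actors": ["FIN7"]},
    "203.0.113.9": {"known_malicious": False, "sightings": 0, "actors": []},
}

def triage(alert: Alert) -> Alert:
    """Dismiss likely false positives, enrich the rest with context, and score them."""
    intel = INTEL.get(alert.indicator)
    if not intel or not intel["known_malicious"]:
        alert.dismissed = True          # no external corroboration: likely false positive
        return alert
    alert.context = intel               # enrich the alert with real-time context
    alert.score = min(1.0, 0.5 + 0.01 * intel["sightings"])  # crude priority score
    return alert
```

An analyst (or an automated playbook) would then work the queue in descending score order, never seeing the dismissed alerts at all.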

Essential Characteristics of Threat Intelligence for Incident Response

Now it’s time for us to examine the characteristics of a powerful threat intelligence capability, and how they address the greatest pain points of incident response teams.


Comprehensive

To be valuable to incident response teams, threat intelligence must be captured automatically from the widest possible range of locations across open sources, technical feeds, and the dark web. Otherwise, analysts will be forced to conduct their own manual research to ensure nothing important has been missed.


Relevant

It’s impossible to avoid all false positives when working to identify and contain incidents. But threat intelligence should help incident response teams quickly identify and purge false positives generated by security technologies such as SIEM and EDR products.

There are two categories of false positives to consider:

  • Alerts that are relevant to an organization but are inaccurate or unhelpful
  • Alerts that are accurate and/or interesting but aren’t relevant to the organization

Both types have the potential to waste an enormous amount of incident response analysts’ time. Advanced threat intelligence products are now employing machine learning technology to identify and discard false positives automatically and draw analysts’ attention to the most important and most relevant intelligence.


Contextualized

Not all threats are created equal. Even among relevant threat alerts, some will inevitably be more urgent and important than the rest. An alert from a single source could be both accurate and relevant, but still not particularly high in priority. That is why context is so important: it provides critical clues about which alerts are most likely to be significant to your organization.

Contextual information related to an alert might include:

  • Corroboration from multiple sources that the same type of alert has been associated with recent attacks
  • Confirmation that the alert has been associated with threat actors known to be active in your industry
  • A timeline showing that the alert occurred slightly before or after other events linked with attacks

Modern machine learning and artificial intelligence (AI) technologies make it possible for a threat intelligence solution to consider multiple sources concurrently and determine which alerts are most important to a specific organization.


Integrated

Among the most critical functions of a threat intelligence system is the ability to integrate with a broad range of security tools, including SIEM and incident response solutions, examine the alerts they generate, and:

  • Determine whether each alert should be dismissed as a false positive
  • Score the alert according to its importance
  • Enrich the alert with valuable extra context

This integration eliminates the need for analysts to manually compare each alert to information in diverse security and threat intelligence tools. Even more important, integration and automated processes can filter out a huge number of false positives without any checking by a human analyst. The amount of time and frustration this capability saves makes it perhaps the single greatest benefit of threat intelligence for incident response teams.

Get 'The Threat Intelligence Handbook'

Not included here are three use cases — and one “abuse” case — that explore in more specific terms the value that threat intelligence can bring to any incident response operation.

The use cases unpack how threat intelligence helps incident response teams shift their stance from reactive to proactive, how to get context to better scope and contain incidents, and how to quickly remediate data loss. The abuse case looks at the risks of committing only halfheartedly to a threat intelligence solution — for example, by incorporating a few threat data feeds and nothing else. Doing so can just add more work for security analysts.

To read the full chapter, “Threat Intelligence for Incident Response,” which includes these use cases alongside other helpful tips and resources, download your free copy of “The Threat Intelligence Handbook” today.