The Right Threat Intelligence for Patching
- Gartner argues that the biggest threats are not the ones that risk causing the most damage to you, but simply the vulnerabilities in your organization’s environment that are being actively exploited “in the wild.”
- According to its research, the primary method of compromise for most threats is the exploitation of known but unmitigated vulnerabilities, not zero-day threats or new exploits. This is largely a matter of cost: threat actors will continue to primarily use the most cost-effective and reliable exploits instead of new ones because they too have limited time and resources.
- Perfect security requires a “patch everything, all the time, everywhere” approach — an impossibility in the real world. Instead, create a metric that tracks the overlap between the vulnerabilities in your own organization’s environment and the vulnerabilities that are actively being exploited. When you can’t patch, have a backup plan to mitigate.
Let’s use a metaphor: if patching vulnerabilities to keep your network safe is like getting vaccinated to protect yourself from disease, then you need to consider which vaccinations are priorities and which are unnecessary. You may need a flu shot every season to stay healthy, but there’s no need to stay vaccinated against diseases like yellow fever and typhoid unless there’s a realistic chance you’ll be exposed to them. That’s why you have to do your research: one of the greatest values of a threat intelligence solution is that it helps you identify the specific vulnerabilities your organization is actually at risk from and provides tailored recommendations.
That’s the premise behind a recommendation made in a recent market guide by Gartner, a technology research company. According to Gartner’s research, only about one-eighth of all vulnerabilities identified in the last decade went on to be actually exploited in real-world attacks, and those vulnerabilities that do get exploited are often reused and leveraged in a wide range of threats.
For that reason, Gartner recommends that organizations shift their thinking about managing vulnerabilities away from ranking threats in terms of severity, and focus first on those vulnerabilities that are actually being exploited in the systems they use. Although both ways of thinking about vulnerabilities are important to consider, ranking and classification systems like Common Vulnerabilities and Exposures (CVE) naming and the Common Vulnerability Scoring System (CVSS) don’t measure how threat actors are actually exploiting these vulnerabilities. Relying solely on vulnerability severity is like getting your vaccine for the bubonic plague before your flu shot because it killed more people at some point in history.
Reorienting Your Goals
A perfect security system would be completely impenetrable and immune to exploitation. Although this ideal is impossible to achieve, it seems like common sense to orient your vulnerability management toward this goal of unbeatable security, leading to a mentality of “patch everything, all the time, everywhere.” With limited time and resources, this approach naturally leads to a focus on patching the “biggest” vulnerabilities first (usually the ones that could cause the most damage if exploited) and dealing with the smaller problems later, if at all.
But as security breaches have continued without end in the last decade, it has become clear that this approach is misguided. Gartner’s Market Guide makes the case that the best framework for vulnerability management should be identifying the “delta between ‘what can I fix’ and ‘what will make the biggest difference, with the pragmatic reality of the time and resources that I actually have.’”
One possible reason for the disconnect between perceived goals and actual outcomes is that defending organizations struggle to put themselves in the mind of a threat actor. From the defender’s side of the fence, it may seem reasonable that an attacker would seek to exploit the biggest, newest vulnerability in order to reap the largest reward. In truth, threat actors are just as limited by time and resources, if not more so, which means they have no incentive to try a new line of attack as long as the same old ones keep working, with costs and required expertise decreasing over time. Gartner found that vulnerabilities are used most often by attackers when they are relatively easy to exploit and present in widely used software; exploits that prey on these kinds of vulnerabilities are consistently among the most circulated in forums and attack creation tools today.
With this in mind, the first step in reorienting your vulnerability management is to get your fundamentals right and patch the vulnerabilities that have already been exploited, rather than worrying about the newest threats.
The Vulnerability Problem by the Numbers
According to Gartner’s research, about 8,000 vulnerabilities a year were disclosed over the past decade, with only a marginal rise from year to year, even as the amount of new software coming into use grew immensely during the same period. Of those new vulnerabilities, only about one in eight has actually gone on to be exploited. The number of threats, on the other hand, has increased exponentially.
In other words, although the number of breaches and threats has increased over the past 10 years, only a small percentage of these are based on new vulnerabilities. Gartner puts it simply: “In short, more threats are leveraging the same small set of vulnerabilities.”
One particular subset of new vulnerabilities regularly draws an outsize amount of attention: zero-day problems. According to Gartner’s research, the vast majority of “new” malware threats that are labeled as zero days are actually variations on a theme, exploiting the same old vulnerabilities in slightly different ways.
Further, the data shows that vulnerabilities actually exploited at day zero make up only about 0.4 percent of all vulnerabilities exploited during the last decade. So, although it is not technically incorrect for threat intelligence vendors to label them as zero days, the most effective solution remains to identify and patch the vulnerabilities specific to the software your organization uses; this alone will keep you safe from the majority of supposed zero-day threats.
Over time, threat actors have gotten quicker at exploiting vulnerabilities. The average time it takes between a vulnerability being identified and an exploit appearing “in the wild” has dropped from 45 days to 15 days over the last decade. This has two implications: First, that you have roughly two weeks to patch or remediate your systems against a new exploit, and if you can’t in that timeframe, you should have a plan to mitigate the damage; and second, if a vulnerability is not exploited within about the first two weeks to three months after it is announced, it is statistically unlikely that it ever will be.
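The two-week window above can be turned into a simple triage check. The sketch below flags vulnerabilities whose disclosure date has passed the average time-to-exploit; the CVE IDs, dates, and 15-day constant are illustrative assumptions, not real data.

```python
from datetime import date

# Exploits now appear about 15 days after disclosure on average (per the data
# above), so treat roughly two weeks as the window to patch or mitigate.
PATCH_WINDOW_DAYS = 15

# Hypothetical disclosure dates for CVEs found in your environment
disclosures = {
    "CVE-2017-0101": date(2017, 3, 1),
    "CVE-2017-0102": date(2017, 3, 20),
}

today = date(2017, 3, 22)  # fixed date for a reproducible example

for cve, disclosed in disclosures.items():
    age = (today - disclosed).days
    if age >= PATCH_WINDOW_DAYS:
        print(f"{cve}: {age} days since disclosure -- patch overdue, mitigate now")
    else:
        print(f"{cve}: {age} days since disclosure -- {PATCH_WINDOW_DAYS - age} days left to patch")
```

In practice, `today` would be the current date and the disclosure dates would come from your scanner or a vulnerability feed.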
What You Can Do
Based on this data, Gartner makes the following suggestions for security managers:
- Start tracking a simple metric that helps you identify the overlap between the vulnerabilities in the systems you use and the ones that are being actively exploited by threat actors. That metric alone, which is not included in most vulnerability ranking and classification systems, will do the most to reduce your risk of being breached. Security operations, analytics and reporting tools, and threat intelligence services help deliver this metric.
- Use controls like intrusion prevention systems, network segmentation, application control, and privileged identity management to mitigate threats and prevent vulnerabilities from being exploited when you can’t patch in time (about two weeks) or no patch is available. Prioritize these mitigations for the vulnerabilities that are being actively exploited.
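The overlap metric in the first recommendation can be sketched in a few lines: intersect the vulnerabilities your scanner finds with those a threat intelligence feed flags as actively exploited. All CVE IDs below are illustrative placeholders, not real data.

```python
# CVEs present in your environment, e.g. from a vulnerability scanner (hypothetical)
environment_cves = {"CVE-2017-0001", "CVE-2017-0002", "CVE-2017-0003", "CVE-2017-0004"}

# CVEs flagged as actively exploited "in the wild" by a threat intel feed (hypothetical)
actively_exploited_cves = {"CVE-2017-0002", "CVE-2017-0004", "CVE-2017-0099"}

# The overlap is your highest-priority patch list
priority_patches = environment_cves & actively_exploited_cves

# A simple trackable metric: the share of your exposed CVEs under active exploitation
overlap_ratio = len(priority_patches) / len(environment_cves)

print(sorted(priority_patches))
print(f"{overlap_ratio:.0%} of environment CVEs are being actively exploited")
```

Driving that ratio toward zero, rather than patching by severity score alone, is the reorientation the guide describes.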
For a closer look at many other use cases for implementing threat intelligence, download your complimentary copy of Gartner’s “Market Guide for Security Threat Intelligence Products and Services.”