Security Intelligence Handbook Chapter 6: How to Prioritize Patching with Vulnerability Intelligence

January 5, 2021 • The Recorded Future Team

Editor’s Note: Over the next several weeks, we’re sharing excerpts from the third edition of our popular book, “The Security Intelligence Handbook: How to Disrupt Adversaries and Reduce Risk with Security Intelligence.” Here, we’re looking at chapter six, “Vulnerability Intelligence.” To read the entire section, download your free copy of the handbook.

There’s a new vulnerability for every day of the week. Yet zero day doesn’t always mean top priority, severity ratings don’t tell the full story, and vulnerability databases can’t publish new flaws fast enough to enable proactive response.

The traditional “whack-a-mole” approach to vulnerability management only leaves teams frustrated, sleepless, and perpetually behind in the race against adversaries. They need a way to stop patching everything and address the vulnerabilities that are most likely to be exploited against the organization — those that present the most risk.

The truth is, many adversaries embrace the “if it ain’t broke, don’t fix it” mentality. If they can exploit vulnerabilities in the most widely used technologies, such as those from Microsoft, Apache, Cisco, or Adobe, they often will.

The most effective security intelligence programs score vulnerability risk based on real-time exploitation evidence — driving fast and accurate patching decisions and minimizing unnecessary patching and downtime. By delivering real-time alerts on flaws affecting their organization’s unique tech stack, vulnerability management teams gain unprecedented insights into attacker behavior and the power to disrupt attacks before damage is done.

Find out how to make vulnerability management manageable and proactively reduce risk in “The Security Intelligence Handbook, Third Edition: How to Disrupt Adversaries and Reduce Risk With Security Intelligence.” In this excerpt, which has been edited and condensed, we’ll examine vulnerability management pain points, describe how vulnerability intelligence zeros in on attacker behaviors, and show how a risk-based approach dramatically simplifies operational efficiency:

Vulnerability management is not glamorous, but it is one of the very few ways to be proactive in securing your organization. Its importance cannot be overstated.

The key to success in vulnerability management is to shift the thinking of your security teams from trying to patch everything to making risk-based decisions. That shift is critical because the vast ocean of vulnerabilities disclosed each year puts incredible stress on the teams responsible for identifying vulnerable assets and deploying patches. To make smart risk-based decisions, take advantage of more sources of security intelligence.

The Vulnerability Problem by the Numbers

According to the Gartner Market Guide for Security Threat Intelligence Products and Services, about 8,000 vulnerabilities a year were disclosed over the past decade. The number increased only slightly from year to year, and only about one in eight of those vulnerabilities were actually exploited. However, during the same period, the amount of new software coming into use grew immensely, and the number of threats has increased exponentially.

In other words, although the number of breaches and threats has increased over the past 10 years, only a small percentage were based on new vulnerabilities. As Gartner puts it, “More threats are leveraging the same small set of vulnerabilities.”

Zero day does not mean top priority

Zero-day threats regularly draw an outsized amount of attention. However, the vast majority of new threats labeled as zero days are actually variations on a theme, exploiting the same old vulnerabilities in slightly different ways. The implication is that the most effective approach to vulnerability management is not to focus on zero-day threats, but rather to identify and patch the vulnerabilities in the software your organization uses.

Time is of the essence

Threat actors have gotten quicker at exploiting vulnerabilities. According to Gartner, the average time it takes between the identification of a vulnerability and the appearance of an exploit in the wild has dropped from 45 days to 15 days over the last decade.

This trend has two implications for vulnerability management teams:

  1. You have roughly two weeks to patch or remediate your systems against a new exploit.
  2. If you can’t patch in that timeframe, you need a plan to mitigate the damage.

Research from IBM X-Force shows that if a vulnerability is not exploited within two weeks to three months after it is announced, it is statistically unlikely that it ever will be exploited. Therefore “old” vulnerabilities are usually not a priority for patching.
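The timing guidance above (exploits typically appear within about 15 days, and exploitation becomes statistically unlikely after roughly three months) can be expressed as a simple triage helper. This is a minimal sketch with hypothetical function and label names; the thresholds come from the Gartner and X-Force figures cited in this chapter, not from any vendor tool.

```python
from datetime import date, timedelta

def triage_by_age(disclosed_on: date, today: date) -> str:
    """Rough triage by time since disclosure. Hypothetical labels;
    thresholds follow the ~15-day and ~3-month windows discussed above."""
    age = today - disclosed_on
    if age <= timedelta(days=15):
        return "urgent"        # exploits typically appear within ~15 days
    if age <= timedelta(days=90):
        return "monitor"       # still inside the plausible exploitation window
    return "deprioritize"      # unexploited after ~3 months: statistically unlikely

print(triage_by_age(date(2021, 1, 1), date(2021, 1, 10)))  # urgent
```

In practice, disclosure age would be only one input alongside exploitation evidence, but it illustrates why the first two weeks matter most.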

Assess Risk Based on Exploitability

Consider this comparison: If patching vulnerabilities to keep your network safe is like getting vaccines to protect yourself from disease, then you need to identify which vaccinations are priorities and which are unnecessary. You may need a flu shot every season to stay healthy, but there’s no need to stay vaccinated against yellow fever or malaria unless you will be exposed to them.

Two of the greatest values of a vulnerability intelligence solution are identification of specific vulnerabilities that represent actual risk to your organization and visibility into their likelihood of exploitation.

Figure 6-1 illustrates the point. Thousands of vulnerabilities have been disclosed. Hundreds are being exploited, and some number of vulnerabilities exist in your environment. You really only need to be concerned about the ones that lie within the intersection of those last two categories — vulnerabilities that are in your environment and are actively being exploited.
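The intersection in Figure 6-1 amounts to a set operation: out of everything disclosed, only the CVEs that are both actively exploited and present in your environment demand attention. A minimal sketch, with made-up CVE identifiers:

```python
# Hypothetical data: the CVE IDs below are placeholders, not real findings.
actively_exploited = {"CVE-2021-0002", "CVE-2021-0004"}  # from external intelligence
in_environment = {"CVE-2021-0001", "CVE-2021-0002"}      # from internal scans

# The priority set is the intersection of the two (Figure 6-1).
priority = actively_exploited & in_environment
print(sorted(priority))  # ['CVE-2021-0002']
```

Everything outside that intersection can be scheduled or deprioritized rather than patched in a panic.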

Severity ratings are often misleading

Ranking threats purely in terms of severity is a mistake that vulnerability managers make regularly. Identification and scoring systems like Common Vulnerabilities and Exposures (CVE) naming and the Common Vulnerability Scoring System (CVSS) don’t take into account whether threat actors are actually exploiting vulnerabilities right now in your industry or locations. Relying solely on vulnerability severity is often like getting a vaccine for the bubonic plague instead of a flu shot because the plague killed more people at some point in history.

The Genesis of Security Intelligence: Vulnerability Databases

Vulnerability databases consolidate information on disclosed vulnerabilities and also score their exploitability.

In fact, one of the very first forms of security intelligence was NIST’s National Vulnerability Database (NVD). It centralizes information on disclosed vulnerabilities to make it easier for organizations to see if they are likely to be affected. For more than 20 years, the NVD has collected information on almost 150,000 vulnerabilities, making it an invaluable source for information security professionals. Nations including China and Russia have followed NIST’s lead by setting up vulnerability databases.

There are two significant limitations to most vulnerability databases:

  1. They focus on technical exploitability rather than active exploitation.
  2. They are not updated fast enough to provide warning of some quickly spreading threats.

Exploitability versus exploitation

Information in vulnerability databases is almost entirely focused on technical exploitability: a judgment of how easily a vulnerability could be exploited and how much damage a successful exploit would do to systems and networks. In the NVD, this is measured through the CVSS scoring system.

Technical exploitability and active exploitation are not the same thing, though. CVSS base scores provide a metric that’s reasonably accurate and easy to understand, but you need to know what information the score is conveying. Unless a base score is modified by a temporal score or an environmental score (https://www.first.org/cvss/calculator/3.0), it really only tells you how bad the vulnerability is hypothetically, not whether it’s actually being exploited in the wild.
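One way to picture the gap between a static base score and real risk is a scoring rule that lets observed exploitation override a low CVSS number. This is a hypothetical weighting scheme for illustration only; it is not the official CVSS temporal or environmental formula, and the function name and thresholds are assumptions.

```python
def real_risk(base_score: float, exploited_in_wild: bool, in_exploit_kit: bool) -> float:
    """Hypothetical adjustment: evidence of active exploitation raises
    priority regardless of the static CVSS base score."""
    score = base_score
    if exploited_in_wild:
        score = max(score, 7.0)   # observed exploitation dominates a low base score
    if in_exploit_kit:
        score = max(score, 9.0)   # commoditized exploits carry the highest risk
    return min(score, 10.0)

# A CVE-2017-0022-style case: medium base score, but packaged into an exploit kit.
print(real_risk(4.3, exploited_in_wild=True, in_exploit_kit=True))  # 9.0
```

The point is not the specific numbers but the ordering: exploitation evidence, not the base score alone, should drive priority.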

Figure 6-2 shows the kind of security intelligence that a vulnerability intelligence tool provides. In this case, the risk a vulnerability poses is assessed from reports of the CVE appearing in the wild, even before the NVD has assigned it a CVSS score.

An object lesson in the difference between the NVD’s “official risk” and the “real risk” of a vulnerability in the wild is CVE-2017-0022. Despite having a CVSS severity score of only 4.3 (in the medium range), Recorded Future considered it one of the top 10 vulnerabilities in use in 2017. The real risk was very high because threat actors added this vulnerability to the widespread Neutrino Exploit Kit, where it played a critical role: checking whether security software was installed on a target system.

Next week versus now

Lack of timeliness is another shortcoming of many vulnerability databases. For example, an analysis by Recorded Future found that 75 percent of disclosed vulnerabilities appear on other online sources before they appear in the NVD — and on average it takes those vulnerabilities a week to show up there. This is a very serious problem, because it handicaps security teams in the race to patch a vulnerability before adversaries are able to exploit it, as illustrated in Figure 6-3.

The informal way in which vulnerabilities are disclosed and announced contributes to the delay in recognizing them in vulnerability databases. Typically, a vendor or researcher discloses the vulnerability to the NVD, which assigns a CVE number and begins an analysis. In the meantime, the vendor or researcher publishes more information on its own blog or a social media account. Good luck collating data from these disparate and hard-to-find sources before threat actors develop proof-of-concept malware and add it to exploit kits!

For details on the processes that threat actors use to exploit vulnerabilities, see the Recorded Future blog post “Behind the Scenes of the Adversary Exploit Process.”

Vulnerability Intelligence and Real Risk

The most effective way to assess the true risk of a vulnerability to your organization is to combine all of the following:

  • Internal vulnerability scanning data
  • External intelligence from a wide variety of sources
  • An understanding of why threat actors are targeting certain vulnerabilities and ignoring others

Internal vulnerability scanning

Almost every vulnerability management team scans internal systems for vulnerabilities, correlates the results with information reported in vulnerability databases, and uses the correlation to determine what to patch. This is a basic use of operational security intelligence, even if we don’t usually think of it that way.

Conventional scanning is an excellent way to deprioritize vulnerabilities that don’t appear on your systems. By itself, however, scanning is not an adequate way to accurately prioritize vulnerabilities that are found.

Risk milestones for vulnerabilities

One powerful way to assess a vulnerability’s risk is to look at how far it has progressed from initial identification to availability, weaponization, and commoditization in exploit kits.

The level of real risk increases dramatically as the vulnerability passes through the milestones shown in Figure 6-4. Broad-based vulnerability intelligence reveals a vulnerability’s progress along this path.
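The progression described above can be encoded as an ordered set of stages, with risk rising at each step. This is a hypothetical encoding of the milestones in Figure 6-4; the stage names and level labels are assumptions for illustration.

```python
from enum import IntEnum

class Milestone(IntEnum):
    """Hypothetical encoding of the risk milestones in Figure 6-4:
    risk rises as an exploit matures toward commoditization."""
    DISCLOSED = 1        # vulnerability identified and announced
    POC_AVAILABLE = 2    # proof-of-concept exploit code exists
    WEAPONIZED = 3       # a working exploit is in circulation
    COMMODITIZED = 4     # packaged into an exploit kit

def risk_level(m: Milestone) -> str:
    return {1: "low", 2: "medium", 3: "high", 4: "critical"}[m]

print(risk_level(Milestone.COMMODITIZED))  # critical
```

Because the stages are ordered, a simple comparison (`Milestone.WEAPONIZED > Milestone.POC_AVAILABLE`) captures the escalation of real risk along the path.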

Understanding the adversary

As discussed elsewhere in this book, good security intelligence should not simply provide information in the form of scores and statistics. That’s why vulnerability intelligence leads to a deeper understanding of how and why threat actors are targeting certain vulnerabilities and ignoring others. Below we discuss sources of intelligence that contribute to this understanding.

Sources of Intelligence

Data from asset scans and external vulnerability databases are only the starting points for generating intelligence that enables you to assess the risk of vulnerabilities. Unless vulnerability intelligence includes data from a wide range and variety of sources, analysts risk missing emerging vulnerabilities until it’s too late.

Valuable sources of information for assessing true risk to your business include:

  • Information security sites like vendor blogs, official disclosure information on vulnerabilities, and security news sites
  • Social media, where link sharing provides jumping off points for uncovering useful intelligence
  • Code repositories such as GitHub, which yield insights into the development of proof-of-concept code for exploiting vulnerabilities
  • Paste sites such as Pastebin and Ghostbin (which are sometimes wrongly defined as dark web sources) that often house lists of exploitable vulnerabilities
  • The dark web, composed of communities and marketplaces with a barrier to entry, where exploits are developed, shared, and sold
  • Forums with no barrier to entry or requirement to be using specific software, where threat actors exchange information on vulnerabilities and exploits
  • Technical feeds that deliver data streams of potentially malicious indicators, which add useful context around the activities of malware and exploit kits

Use Cases for Cross-Referencing Intelligence

To accurately assess real risk, you must be able to correlate information from multiple sources. Once you begin to understand how individual references combine to tell the whole story, you will be able to map the intelligence you have to the risk milestones a vulnerability typically goes through.

For example, you might notice a new vulnerability disclosed on a vendor’s website. Then, you discover a tweet with a link to proof-of-concept exploit code on GitHub. Later, you find the code is being sold on a dark web forum. Eventually, you might see news reports of the vulnerability being exploited in the wild.
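The progression in that example (vendor disclosure, then a proof-of-concept on GitHub, then a dark web sale, then in-the-wild reports) can be sketched as mapping each observed reference to a milestone and tracking the furthest stage a CVE has reached. The source categories and numeric stages below are hypothetical labels, not an actual product taxonomy.

```python
# Hypothetical mapping of reference types to milestone stages (1 = disclosed
# ... 4 = exploited in the wild); higher means greater real risk.
MILESTONE_BY_SOURCE = {
    "vendor_disclosure": 1,
    "poc_on_github": 2,
    "dark_web_sale": 3,
    "in_the_wild_report": 4,
}

def furthest_milestone(references: list) -> int:
    """Return the highest milestone reached across all observed references."""
    return max((MILESTONE_BY_SOURCE.get(r, 0) for r in references), default=0)

refs = ["vendor_disclosure", "poc_on_github", "dark_web_sale"]
print(furthest_milestone(refs))  # 3
```

Correlating sources this way is what turns scattered references into a single, defensible priority signal.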

Here’s another example. The website of an Information Sharing and Analysis Center (ISAC) for your industry shows that an organization like yours has been victimized by an exploit kit that attacks a vulnerability in a specialized, industry-specific software application. You find that there are four copies of that software in corners of your organization that have not been patched in three years.

Cross-referencing this kind of intelligence enables you to move away from a “race to patch everything” mode of operation, and empowers you to focus on the vulnerabilities that present the greatest actual risk.

Bridging the Risk Gaps Among Security, Operations, and Business Leadership

In most organizations, the responsibility for protecting against vulnerabilities falls on the shoulders of two teams:

  1. The vulnerability management team runs scans and prioritizes vulnerabilities based on potential risk.
  2. The IT operations team deploys patches and remediates the affected systems.

This dynamic creates a tendency to approach vulnerability management “by the numbers.” For example, the vulnerability management team in the security organization might determine that several vulnerabilities in Apache web servers pose a very high risk to the business and should be given top priority. However, the IT operations team may be supporting a lot more Windows systems than Apache servers. If team members are measured strictly on the number of systems patched, they have an incentive to keep their focus on lower priority Windows vulnerabilities.

Intelligence on exploitability also prepares your organization to strike the correct balance between patching vulnerable systems and interrupting business operations. Most organizations have a strong aversion to disturbing business continuity. However, if you know that a patch will protect the organization against a real, imminent risk, then a short interruption is completely justified.

The risk milestones framework outlined in Figure 6-4 makes it much easier to communicate the danger of a vulnerability across your security and operations teams, up through senior managers, and even to the board of directors. This level of visibility into the rationale behind decisions made around vulnerabilities will increase confidence in the security team across your entire organization.

To reduce the gap between the vulnerability management and IT operations teams, introduce risk of exploitability as a key driver for prioritizing patches. Arming the vulnerability management team with more contextualized data about the risk of exploitability will enable them to pinpoint a smaller number of high-risk CVEs, which will result in them making fewer demands on the operations team. The operations team will then be able to give top priority to that small number of critical patches, and still have time to address their other goals.

Get ‘The Security Intelligence Handbook’

This chapter is one of many in our new book that demonstrates how to disrupt adversaries and measurably reduce risk with security intelligence at the center of your security program. Subsequent chapters explore different use cases, including the benefits of security intelligence for brand protection, third-party risk management, security leadership, and more.

Download your copy of “The Security Intelligence Handbook” now.

