CVSS Scores: A Useful Guide

Posted: 24th April 2018

Key Takeaways

  • The Common Vulnerability Scoring System (CVSS), a free and industry-standard way of ranking the severity of vulnerabilities, is important for anyone in the cybersecurity industry to understand, both for knowing when to rely on it and when to seek out more information.
  • A vulnerability is typically given a base score in CVSS, which is a rating from zero to 10 that gives an idea of how easy it is to exploit a vulnerability and how damaging it can be. Some vulnerabilities are also given temporal and environmental scores which modify the base score, but many are not.
  • Threat intelligence can provide much more detailed information about how vulnerabilities are actually being exploited “in the wild.” This can result in vastly different rankings from a CVSS base score.

The Common Vulnerability Scoring System is a way of assigning severity rankings to computer system vulnerabilities, ranging from zero (least severe) to 10 (most severe). According to the Forum of Incident Response and Security Teams (FIRST), CVSS is valuable for three main reasons:

  1. It provides a standardized vulnerability score across the industry, helping critical information flow more effectively between sections within an organization and between organizations.
  2. The formula for determining the score is public and freely distributed, providing transparency.
  3. It helps prioritize risk — CVSS rankings provide both a general score and more specific metrics.

For these reasons, it is helpful for everyone in the business to understand how CVSS scores are calculated. But it is also important to recognize the limitations of the system and know when to rely on it and when to get more information from threat intelligence sources.

The CVSS scoring system is now in its third iteration — CVSSv3. A CVSSv3 score has three values for ranking a vulnerability: A base score, which gives an idea of how easy it is to exploit the vulnerability and how much damage an exploit targeting that vulnerability could inflict; a temporal score, which ranks how aware people are of the vulnerability, what remedial steps are being taken, and whether threat actors are targeting it; and an environmental score, which provides a more customized metric specific to an organization or work environment. Each of these three scores is derived from formulas that include different subsets of metrics. Let’s look at each more closely.

Base Score

This number represents a ranking of some of the qualities inherent to a vulnerability, which will not change over time or be dependent on the environment the vulnerability appears in. The base score comes from two subequations which themselves are each made up of a handful of metrics: the Exploitability Subscore and the Impact Subscore.

The Exploitability Subscore is based on the qualities of the vulnerable component itself — its metrics define how susceptible the component is to attack. The higher the combined score, the easier it is to exploit that vulnerability. Each metric here is assigned one of a set of named values specific to that metric, not a numerical score.

  • The Attack Vector (AV) metric describes how easy it is for an attacker to actually access the vulnerability. The score will be higher the more remote an attacker can be — a vulnerability that requires the attacker to be physically present receives the lowest AV score, one that requires local access scores higher, one that can be exploited over an adjacent network higher still, and one that can be exploited remotely over a network the highest.
  • The Attack Complexity (AC) metric describes what conditions outside the attacker’s control must exist for the vulnerability to be exploited. A value of low means there are no special conditions and an attacker can repeatedly exploit the vulnerability; a value of high means an attacker might need to, for example, gather more information on a specific target before succeeding.
  • The Privileges Required (PR) metric describes what level of privileges an attacker must have before they can exploit a vulnerability — none required (the highest score); low privileges, meaning the attack might only affect settings and files at a basic user level; or high privileges required, meaning the attacker will need to have administrative privileges or something similar to meaningfully exploit the vulnerability.
  • The User Interaction (UI) metric describes whether the attacker will need another user to participate in the attack for it to succeed. This is defined as a binary metric for the purposes of scoring — either it’s required or it isn’t.

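Per the CVSSv3 specification, each of these four metric values maps to a fixed numeric weight, and the Exploitability Subscore is simply their product scaled by a constant. A minimal sketch in Python (the weights below are the published CVSSv3 values; the Privileges Required weights shown assume an unchanged scope):

```python
# CVSSv3 Exploitability Subscore: 8.22 x AV x AC x PR x UI
# Weights are the published CVSSv3 metric values.
ATTACK_VECTOR = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Network, Adjacent, Local, Physical
ATTACK_COMPLEXITY = {"L": 0.77, "H": 0.44}                    # Low, High
PRIVILEGES_REQUIRED = {"N": 0.85, "L": 0.62, "H": 0.27}       # None, Low, High (scope unchanged)
USER_INTERACTION = {"N": 0.85, "R": 0.62}                     # None, Required

def exploitability(av: str, ac: str, pr: str, ui: str) -> float:
    """Exploitability Subscore for an unchanged-scope vulnerability."""
    return (8.22 * ATTACK_VECTOR[av] * ATTACK_COMPLEXITY[ac]
                 * PRIVILEGES_REQUIRED[pr] * UI_WEIGHT(ui))

def UI_WEIGHT(ui: str) -> float:
    return USER_INTERACTION[ui]

# A remotely reachable, low-complexity vulnerability requiring no
# privileges and no user interaction (AV:N/AC:L/PR:N/UI:N):
print(round(exploitability("N", "L", "N", "N"), 2))  # 3.89
```

Note how the weights encode the ordering described above: a physically present attacker (0.20) contributes far less to the score than a fully remote one (0.85).
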
The Impact Subscore defines how significantly certain properties of the vulnerable component will be affected if it is successfully exploited. The first and most significant measure of impact is the Authorization Scope, or just Scope (S), of the vulnerability. The scope metric gives an idea of how badly an exploited vulnerability can impact other components or resources beyond the privileges directly associated with it. In a sense, this is a measure of the potential a vulnerability has to “break out of prison” and compromise other systems. This is also a binary metric — either an exploited vulnerability can only affect resources at the same level of privilege, or it allows an attacker to reach beyond the authorization privileges of the vulnerable component and impact other systems. Alongside Scope, which changes the formula used to combine them, the Impact Subscore reflects the following three metrics:

  1. The Confidentiality (C) metric, in a way, is another measure of how much authority an exploited vulnerability provides. Such an exploit might result in no loss of confidentiality; a low degree, where some indirect access to restricted information is possible; or a high degree of loss that can lead to further serious tampering of sensitive information, like access to an administrator’s username and password.
  2. The Integrity (I) metric reflects how much data corruption an exploited vulnerability makes possible. The score can be none; low, where some data can be modified but does not have significant consequences; or high, where there is a complete loss of protection over all data, or the data that is able to be modified would significantly impact the function of the vulnerable component.
  3. The Availability (A) metric is a measurement of the loss of availability to the resources or services of the affected component. This metric is also scored as having no impact; a low impact, where there is some reduced access or interruption; or high, where there is either a complete loss of access or a persistent, serious disruption.

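Per the CVSSv3 specification, the three C/I/A values map to fixed weights that are combined probability-style, and the Scope metric selects between two formulas. A minimal sketch (weights and constants from the published spec):

```python
# CVSSv3 Impact Subscore. C/I/A values map to fixed weights:
# High = 0.56, Low = 0.22, None = 0.0 (published CVSSv3 values).
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def impact(c: str, i: str, a: str, scope_changed: bool = False) -> float:
    # ISC_base combines the three losses like independent probabilities.
    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        # Scope change uses a steeper curve that rewards high impact.
        return 7.52 * (isc_base - 0.029) - 3.25 * (isc_base - 0.02) ** 15
    return 6.42 * isc_base

# Complete loss of confidentiality, integrity, and availability,
# with scope unchanged (C:H/I:H/A:H):
print(round(impact("H", "H", "H"), 2))  # 5.87
```
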
To sum up, a base CVSSv3 score is derived from a formula that takes into account the Exploitability Subscore, which is a measure of how easy it is for a vulnerability to be exploited, and the Impact Subscore, which is a measure of how significantly the vulnerable component will be affected if the vulnerability is successfully exploited. But this base score is really only a hypothetical measurement. It gives the dimensions of a blank canvas and tells you how much it’ll cost, but it doesn’t say what the artist will actually paint. A base CVSSv3 score can also be modified by an additional temporal score and an environmental score. Not every vulnerability assigned a base score will also have temporal and environmental scores calculated.

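Putting the two subscores together: per the CVSSv3 specification, the base score is zero when there is no impact; otherwise the subscores are summed (scaled by 1.08 when scope changes), capped at 10, and rounded up to one decimal place. A self-contained sketch using the published metric weights (the rounding helper follows the CVSS v3.1 pseudocode, which avoids floating-point surprises):

```python
# CVSSv3 base score from the published metric weights.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}  # (changed scope uses 0.68/0.50 for L/H -- omitted here)
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' rule (v3.1 pseudocode)."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a, scope_changed=False) -> float:
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    isc_base = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope_changed:
        impact = 7.52 * (isc_base - 0.029) - 3.25 * (isc_base - 0.02) ** 15
    else:
        impact = 6.42 * isc_base
    if impact <= 0:
        return 0.0  # no impact means no risk, regardless of exploitability
    total = 1.08 * (impact + exploitability) if scope_changed else impact + exploitability
    return roundup(min(total, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -- the familiar worst case
# short of a scope change:
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # 9.8
```

This reproduces the 9.8 ratings commonly seen on remotely exploitable, high-impact vulnerabilities in the National Vulnerability Database.
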
Temporal Score

This score, which is derived from three metrics, gives a better idea of how threat actors are actually exploiting a vulnerability, as well as what remediations are available.

  1. The Exploit Code Maturity (E) metric reflects how likely it is that a vulnerability will actually be exploited, based on what code or exploit kits have been discovered “in the wild.” This metric can either be assigned a rank of “undefined,” or be given one of four increasingly severe scores: unproven, meaning no known exploit code exists; proof of concept, meaning some code exists but it is not practical to use in an attack; functional, which means that working code exists; or high, which means that either no exploit is required or the code available is consistently effective and can be delivered autonomously.
  2. The Remediation Level (RL) metric measures how easy the vulnerability is to fix. In a sense, it is the counterpoint to the Exploit Code Maturity metric. It can also be either undefined or measured in four degrees: a remedial solution is unavailable; an unofficial workaround exists; there is a temporary fix; or an official fix — a complete solution offered by the vendor — is available.
  3. The Report Confidence (RC) metric defines how confidently it can be said that a vulnerability exists. Vulnerabilities may be identified by third parties but not recognized by the component’s official vendor, or a vulnerability may be recognized but its cause unknown. This metric can either be undefined, or given one of three rankings: unknown, meaning there are some uncertain reports about the vulnerability; reasonable, meaning some major details have been shared and the vulnerability is reproducible but the root cause may remain unknown; or confirmed, where a vulnerability’s cause is known and it is able to be consistently reproduced.

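Per the CVSSv3 specification, the temporal score simply multiplies the base score by the weight of each of these three metrics and rounds up; “not defined” counts as 1.0, so omitted metrics leave the base score untouched. A minimal sketch (weights from the published spec):

```python
# CVSSv3 temporal weights ("X" = not defined, which leaves the score as-is).
EXPLOIT_CODE_MATURITY = {"X": 1.0, "U": 0.91, "P": 0.94, "F": 0.97, "H": 1.0}
REMEDIATION_LEVEL = {"X": 1.0, "O": 0.95, "T": 0.96, "W": 0.97, "U": 1.0}
REPORT_CONFIDENCE = {"X": 1.0, "U": 0.92, "R": 0.96, "C": 1.0}

def roundup(x: float) -> float:
    """CVSS 'round up to one decimal place' rule (v3.1 pseudocode)."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def temporal_score(base: float, e: str = "X", rl: str = "X", rc: str = "X") -> float:
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A 9.8 base score with functional exploit code (F), an official
# fix available (O), and a confirmed report (C):
print(temporal_score(9.8, e="F", rl="O", rc="C"))  # 9.1
```

Notice that the temporal score can only lower a base score, never raise it — the weights never exceed 1.0.
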
Environmental Score

This score is derived from two subscores: a Security Requirements Subscore, which is defined by the three components of the Impact score (Confidentiality, Integrity, and Availability) as measured within a specific environment, and a Modified Base Score, which reevaluates the metrics defining the base score according to the specific environment of an organization.

Each security requirement metric is either not defined, or given one of three values: low, meaning the loss of confidentiality, integrity, or availability caused by the vulnerability being exploited will not have a major effect on an organization or its employees or customers; medium, where the effect will be significant; or high, where the effect will be catastrophic.

The modified base scores are evaluated in the same way as before, but the specific circumstances of one environment in which the vulnerability may exist are taken into account.

Finally, a vulnerability is assigned a CVSS base score between 0.0 and 10.0 — a score of 0.0 represents no risk; 0.1 – 3.9 represents low risk; 4.0 – 6.9, medium; 7.0 – 8.9, high; and 9.0 – 10.0 is a critical risk score.

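These qualitative bands are straightforward to encode; a small sketch mapping a numeric score to the severity ratings listed above:

```python
def severity(score: float) -> str:
    """Map a CVSSv3 score to its qualitative severity rating."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(severity(9.8))  # Critical
print(severity(5.0))  # Medium
```
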
Editor’s Note

For more information on vulnerability scoring, see our recent analyses on Chinese vulnerability data.

Why Defenders Need to Look Beyond CVSS Scores

CVSS scores can provide a great starting point for evaluating how bad a particular vulnerability is. The base score provides a metric that’s reasonably accurate and easy to understand — provided you know what information the score is conveying. But many vulnerabilities will only be given a base CVSS score, unmodified by a temporal score or an environmental score, meaning the severity ranking of the score is really only telling you how bad the vulnerability is hypothetically, not whether it’s actually being exploited in the wild. It’s like ranking diseases based on how deadly and easily transmitted they are: An all-time top 10 list might have the Black Plague and the Spanish Flu right near the top, but they’re not really diseases you need to worry about today.

That’s where threat intelligence comes in. Good threat intelligence should not simply provide more information in the form of scores and statistics, but also a deeper understanding of how and why threat actors are targeting certain vulnerabilities and ignoring others. That often includes metrics that resemble the temporal and environmental scores only sometimes provided by CVSS. Recorded Future’s native risk scoring system, for example, also ranks vulnerabilities based on criminal adoption, patterns in exploit sharing, and numbers of links to malware. This information often comes from sources that are difficult to access, like forums on the dark web. This results in an “in the wild” severity ranking that is often drastically different from the base score provided by CVSS.

For more insight into available intelligence around vulnerabilities exploited by threat actors in 2017, download your complimentary copy of “The Top 10 Vulnerabilities Used by Cybercriminals in 2017.”