Facts, Assumptions, and Running Estimates in Cyber Threat Intelligence
By Dan Bearl on November 27, 2018
Over the six years since I transitioned from active duty in the U.S. Army to the U.S. Army Reserves and started working in cybersecurity, I have continued to find parallels and intersections between my military and civilian careers.
Most recently, during a staff planning exercise with my Reserve unit, I was struck by the similarities between the way Army planners frame information during the planning process and how cyber threat intelligence analysts and security managers need to approach threat intelligence (as an aside, if you haven’t had the pleasure of performing the Military Decision Making Process (MDMP), you are missing out on a really good time).
Prior to this exercise, our commander coached us on some basic guidelines and his expectations for how we approached the mission. One key point stuck out to me: “Treat assumptions as facts until they’re disproved, but also constantly seek to disprove your assumptions.”
This thought isn’t groundbreaking or anything I hadn’t heard before, and I don’t think it was intended to be — it was more a reminder to stick to the fundamentals of planning doctrine and to be honest about and unattached to the outcomes. A little Army terminology might help explain exactly what he meant.
Much of military planning is built on “running estimates,” which essentially means to seek to describe the current situation and how the situation will change over time. For example, you might say, “We currently have X gallons of fuel on hand. Given the anticipated resupply and consumption, we expect to have Y gallons of fuel on hand in 24 hours, and Z gallons in 48 hours.” For intelligence assessments, you might say, “We know the enemy is currently at this location and this strength. Based on our understanding of the enemy’s doctrine and objectives, in 24 hours, we expect them to be here, and in 48 hours, there.”
Another good analogy is a weather report. We know what the current weather is because we can directly measure it. We estimate the future weather based on experience and statistical models. Essentially, a running estimate begins with what we know about the current situation and extrapolates the future from that point.
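The fuel example above is simple enough to sketch in a few lines of code. This is just an illustration of the running-estimate idea; the function name and all the numbers are made up for the example.

```python
# A minimal sketch of a "running estimate": start from a measured fact
# (fuel on hand now) and extrapolate forward under stated assumptions
# (resupply and consumption rates). All values are illustrative.

def running_estimate(on_hand, resupply_per_day, consumption_per_day, days):
    """Project fuel on hand at 24-hour intervals for the next `days` days."""
    projections = []
    for day in range(1, days + 1):
        on_hand = on_hand + resupply_per_day - consumption_per_day
        projections.append((day * 24, max(on_hand, 0.0)))  # (hours out, gallons)
    return projections

# Current fact: 500 gallons on hand. Assumptions: 200 gallons resupplied
# and 300 consumed per day.
print(running_estimate(500.0, 200.0, 300.0, 2))
# → [(24, 400.0), (48, 300.0)]
```

The point is not the arithmetic but the shape of the estimate: one measured fact, a set of explicit assumptions, and a projection that is only as good as those assumptions.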
As it applies to cyber threat intelligence, there are things we can observe as statements of fact. For example, things like:
- A host at a particular IP address is behaving like a RAT controller.
- A piece of malware is coded to call back to specific domains or IP addresses.
- A hash corresponds to a known malicious file.
- A user in a dark web forum is selling access to a corporate network.
From these statements, we can build assumptions and then prioritize limited security resources. For example, if we observe a known malicious indicator on our network, we can assume it is up to no good. If a certain threat actor has used specific infrastructure and capabilities against an industry peer, we can assume they will attempt the same against us.
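One way to picture that prioritization step is a simple ranking over observations. The scoring scheme below is a hypothetical sketch, not a standard: it just encodes the intuition that an indicator seen on our own network warrants attention before one only seen used against a peer.

```python
# A hypothetical sketch of turning observations into prioritized work.
# The context labels and weights are assumptions for illustration only.

def prioritize(observations):
    """Sort observations so the strongest assumptions are handled first."""
    weight = {"on_our_network": 2, "used_against_peer": 1}
    return sorted(observations,
                  key=lambda o: weight.get(o["context"], 0),
                  reverse=True)

obs = [
    {"indicator": "evil.example.com", "context": "used_against_peer"},
    {"indicator": "203.0.113.7", "context": "on_our_network"},
]
print([o["indicator"] for o in prioritize(obs)])
# → ['203.0.113.7', 'evil.example.com']
```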
Threat Intelligence is About Time and Context
The challenge with threat intelligence is that what we think of as facts and assumptions in this space can be extremely ephemeral. That ties in to another thing my commander said: “All estimates are best effort and point-in-time.”
This was a little expectation management, but also a reminder to the staff that none of these facts and assumptions are useful forever, and must be continually updated with the most current information. We can estimate where we expect to be tomorrow, or three days from now, but those estimates are only as accurate as our assumptions, and there are endless opportunities for unforeseen factors to confound our plans.
You can come to the right conclusions when given incomplete information, and still be wrong in an objective sense. In these cases, we need to accept that reality and adjust our plans accordingly. The same is absolutely true in cybersecurity programs.
The Difference Between Facts and Assumptions
Most facts become assumptions as soon as we stop observing them. When an IP address, domain name, or any adversary infrastructure is described as “malicious,” that description applies to the point in time when they were observed. Once we cease to observe the malicious activity, that malicious designation slips from fact to assumption. That doesn’t mean it won’t be proven again in the future, but it must be rechecked regularly to maintain confidence in that judgment.
And sure, different facts will have different shelf lives. Evidence of the exploitation of a vulnerability might be reasonably valid for weeks or months, IP addresses and domains might cease to be relevant in hours or days, or hashes might never cease to positively indicate some malicious activity (barring improbable hash collisions, of course).
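The shelf-life idea lends itself to a small sketch: a “malicious” designation stays a fact only while it is fresh, then decays into an assumption that needs rechecking. The specific shelf-life values below are assumptions chosen to match the rough time scales mentioned above, not established thresholds.

```python
# A minimal sketch of indicator shelf lives: facts decay into assumptions
# once we stop observing them. Shelf-life values are illustrative.

from datetime import datetime, timedelta

SHELF_LIFE = {
    "ip": timedelta(days=2),              # IPs/domains may go stale in hours or days
    "domain": timedelta(days=2),
    "vulnerability": timedelta(days=60),  # exploitation evidence lasts weeks or months
    "hash": timedelta(days=36500),        # hashes effectively never expire
}

def status(indicator_type, last_observed, now):
    """Classify a 'malicious' designation as fact or assumption to recheck."""
    age = now - last_observed
    return "fact" if age <= SHELF_LIFE[indicator_type] else "assumption (recheck)"

now = datetime(2018, 11, 27)
print(status("ip", datetime(2018, 11, 20), now))  # → assumption (recheck)
print(status("hash", datetime(2017, 1, 1), now))  # → fact
```

The exact numbers matter less than the discipline: every designation carries an implicit expiration, and the analyst's job is to recheck before relying on it.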
Shifting Threat Landscapes
The end result is that the pictures painted by cyber threat intelligence teams are volatile landscapes, complicated by shifts in adversary behavior and gaps in knowledge. Reports and other products generated by cyber threat intelligence programs are best effort and point-in-time. They are almost certainly based primarily on assumptions. Treat them as fact, but continually reassess and attempt to disprove them. The aggregate result of that back and forth is a clearer picture of the threat landscape than any single report can provide.
Cyber threat intelligence program managers should keep this in mind when developing their program and processes. It’s a combination of measured expectations and due diligence — we accept that we will never have a wholly accurate picture of the threats facing our organizations, but we make a best effort to get as close to reality as possible.
To learn more about how threat intelligence can bring value to your security strategy, request a personalized demo.