How Agencies Can Refine Threat Intelligence Through Automation
Editor’s Note: In this guest blog post, Federal News Network sits down with Stu Solomon, President at Recorded Future, to discuss the cyber issues inherent to government agencies — and how security intelligence is key to combating attacks.
Government agency cybersecurity personnel have had to work overtime to secure their networks during the pandemic, as the remote workforce pushed boundaries beyond the traditional perimeter. The attack surface grew exponentially as the majority of endpoints moved from offices to people's homes. But as legacy security models begin to fail, one thing can help agencies keep up with this changing dynamic: automated threat intelligence.
Threat intelligence ultimately generates decision advantage, and automation helps agencies act at the speed of the adversary while mitigating risk. Threat intelligence has become a big data problem, and the human brain simply can’t make the necessary connections between seemingly disparate data points at that scale.
“It was already a big data problem to start with. But prior to migration to work from home scenarios, the data problem was generally inside of a confined space; our logical and physical boundaries were pretty well defined. And so you could put a ring fence of detective controls around it, understanding the pathways that threats would take to be able to create outcomes in a much more predictable way,” said Stuart Solomon, chief operating officer of Recorded Future. “As we move both to a work from home scenario as well as the continued migration to cloud environments, we’ve simply expanded our boundary; we've simply added additional surface to the attack surface. And we've added more data that we need to find that needle in the haystack.”
At this kind of scale, even just gathering the data, much less processing it, requires significant automation. And then you have to be able to determine whether the data represents a potential threat, and if so, what it might be. The data has to enable a binary decision.
“Is this a threat? Or is this not a threat?” Solomon said. “Is this something that I have seen before? Is this something unique, or different? Am I being uniquely targeted? Or is this a ubiquitous campaign that I've just been swept up in? Is this something that's going to create an impact that I'm not comfortable with in my environment, or because of my business processes, or because of my responsibilities within my enterprise that I can't live with?”
Each of those questions requires some amount of automation to answer due to the scale, the dynamic nature of the adversary, and the associated threats. And then you have to take those answers and turn them into actions, which is the most important part of threat intelligence. This could include writing a firewall rule, blacklisting a series of malicious IP addresses or domains, starting an automated vulnerability scan, or proactively setting up a hunting capability.
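As a minimal sketch of what "turning answers into actions" can look like, the snippet below generates firewall deny rules from a list of confirmed-malicious IP addresses. The indicator list is a made-up example, and the iptables-style rule format is just one common target; any real deployment would pull indicators from a vetted feed and push rules through the agency's own change process.

```python
# Illustrative sketch only: automating one response action by
# rendering a deny rule per confirmed-malicious IP address.
# The IPs below are documentation-range placeholders, not real indicators.

MALICIOUS_IPS = ["203.0.113.7", "198.51.100.23"]  # e.g. from a threat intel feed

def to_firewall_rules(ips):
    """Render one iptables-style DROP rule per malicious source IP."""
    return [f"-A INPUT -s {ip} -j DROP" for ip in ips]

for rule in to_firewall_rules(MALICIOUS_IPS):
    print(rule)
```

The same pattern generalizes to the other actions mentioned above: the trigger is a verdict ("this is a threat"), and the output is a machine-consumable artifact, whether that is a firewall rule, a scan job, or a hunting query.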
Another critical component is data sharing. Threats tend not to be unique; they follow patterns based on the outcome they’re trying to achieve, and they tend to manifest in specific destructive or disruptive capabilities or scenarios.
“There's generally a threat pattern that can be emblematic of similar threats across other platforms, other people, so intelligence sharing and information sharing is hypercritical,” Solomon said. “Look at the Information Sharing and Analysis Centers (ISACs) as an example. The ISACs have spent a lot of time across various sectors of industry focused on the same basic notion, which is there's a collective good that can be gained by understanding attack patterns, vulnerabilities and signatures associated with detecting them earlier on and proactively in their attack life cycles.”
In a data-sharing scenario, once data has been normalized so it can be understood by everyone, it can help all participants learn the evolution of a threat pattern and the associated technical indicators. There are multiple components to this type of dynamic as well. Some of the data has to be for human consumption, in order to understand the threat. Some of it has to be for machine consumption, to tune detective controls. And some of it will be perishable, like data on known malicious IP addresses, which are likely to change after a certain amount of time.
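A normalized, machine-consumable indicator with an explicit expiry can be sketched as below. The field names here are hypothetical, loosely modeled on STIX-style sharing formats, and the 48-hour lifetime is an arbitrary example; the point is that perishable data ages out automatically instead of being acted on forever.

```python
# Illustrative sketch: a shared indicator record with a validity window,
# so perishable data (like a malicious IP) expires automatically.
# Field names and TTLs are hypothetical examples.
from datetime import datetime, timedelta, timezone

def make_indicator(value, kind, ttl_hours=24):
    """Build a normalized indicator record with an expiry timestamp."""
    now = datetime.now(timezone.utc)
    return {
        "value": value,                               # e.g. an IP address or domain
        "type": kind,                                 # machine-readable category
        "valid_until": now + timedelta(hours=ttl_hours),
    }

def is_fresh(indicator):
    """Check the expiry before tuning controls on this indicator."""
    return datetime.now(timezone.utc) < indicator["valid_until"]

ioc = make_indicator("203.0.113.7", "ipv4-addr", ttl_hours=48)
print(is_fresh(ioc))  # True while inside its validity window
```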
Most agencies are already using some level of automation, largely in the technical and detective controls layer. Some have moved on to applying automation to their analytical layer, creating correlations. Solomon said the next step is to start applying that automation to building a picture of normalcy, and detecting deviations from that normalcy. And then ultimately, that will lead to automating actions in response to threat detection.
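Building a picture of normalcy and flagging deviations can be as simple, in principle, as a statistical baseline. The sketch below uses a z-score over made-up hourly event counts; it is not a production detector, just the smallest possible illustration of "learn normal, alert on abnormal."

```python
# Illustrative sketch: flag deviations from a baseline of "normal"
# activity using a z-score over hourly event counts.
# The counts and threshold are invented examples, not real telemetry.
import statistics

baseline = [102, 98, 110, 95, 105, 99, 101]  # typical hourly login counts
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag counts more than `threshold` standard deviations from the baseline."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(104))  # within the normal range
print(is_anomalous(450))  # far outside the baseline
```

Real deployments layer many such models over many signals, but the automation pattern Solomon describes is the same: the baseline is learned automatically, and a detected deviation becomes the triggering event for an automated response.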
But Solomon added three caveats to the use of automation. First, it can’t replace humans. What it can do is move them further to the right in the value stream, and allow them to deal with more complex problems. Second, successful automation requires that good processes already be in place. And third, the most important requirement for automation is a triggering event.
“I think categorically, everyone wants automation,” Solomon said. “But doing the basic building blocks to get there requires more refinement on some of the strategies.”
Federal News Network conducted a survey of four agencies about their cyber threat and detection habits. Read the full report here, and learn how agencies are protecting themselves against threats.