How Relativity Is Empowered by Intelligence-Driven Security

Posted: 9th October 2018

Key Takeaways

  • Jerry Finley, director of cybersecurity and deputy CSO at Relativity, joined us for a webinar to talk about how he and his team use threat intelligence to help with their incident response, threat hunting, reporting, and more.
  • Finley sees threat intelligence as having three primary functions: it helps with incident correlation and generation, it fulfills the intelligence requirements set by other teams, and it improves reporting.
  • When responding to incidents generated in their SIEM solutions, the team at Relativity relies heavily on automation. Their runbooks are structured in a way that lets analysts turn toward more project-focused work, threat hunting, and research, and away from menial yet time-consuming tasks.

As director of cybersecurity and deputy chief security officer at Relativity, Jerry Finley has extensive experience with the value that threat intelligence brings to vulnerability management, threat hunting, and more. He recently joined Recorded Future for a webinar on some of the more technical aspects of how he and his team at Relativity, developers of the industry-leading e-discovery software, use threat intelligence.

In the course of his presentation, Finley covered what he sees as the three primary functions of threat intelligence, the essential role that automation plays in his team’s threat intelligence development cycle, and a few specific data science techniques his team uses to identify and correlate threats.

Threat Intelligence Has 3 Primary Functions

Throughout his presentation, Finley discussed what he calls the “three primary functions” of threat intelligence:

  1. Incident Correlation/Generation: This is the application of threat intelligence to detect new threats within a network. Finley calls this function the “lifeblood” of cybersecurity.
  2. Fulfilling Intelligence Requirements: In short, threat intelligence must be able to answer questions with responses that are concise and actionable.
  3. Reporting: Threat intelligence should take the form of reports on critical findings that can be distributed to the right people within an organization. This is about organizing all your research in one place in a result that’s easy to refer back to, helping drive future cycles of intelligence generation.

These three qualities — Finley also refers to them as the three “main priorities” of any threat intelligence program — get at the heart of what separates threat intelligence from the mere collection of data.

In explaining these points further, Finley emphasizes that the most important aspects of useful threat intelligence are that it provides context and leads to action. “We really make an effort to ensure threat intelligence is central to what we do because that context is invaluable in making informed decisions,” Finley says.

Identifying Threat Intelligence Requirements

With that in mind, developing effective threat intelligence involves understanding how it can “augment existing processes.” Finley and his team start by talking to other teams at Relativity to understand “what they need to know that they don’t know today” and “how they would like to get that information.”

“Any deliverables that we produce are scoped to the purpose of the requirement so that the information is actionable and consumable,” Finley explains. “It doesn’t serve either side to make a report that can’t be acted upon.”

Identifying these intelligence requirements from the start is essential because actionability means different things depending on the audience — a sheet full of technical indicators might not mean much to an executive trying to make long-term decisions about what security measures to invest in, but a high-level report might be of equally little use to an analyst trying to make a quick decision.

“You will have a number of different audiences when you’re operating this model, trying to understand those requirements,” says Finley. “The idea is to understand the purpose of what they’re using that information for. Understanding what they intend to use the data for will go a long way in scoping and level-setting that information.”

A Runbook for Incident Response, Investigation, and Remediation

Finley then walked through a typical runbook his team uses for investigating and remediating threats. Having a set of standard rules and procedures in place helps his team make sure that they’re doing everything they need to do in the most efficient and productive manner.

The first step — or really, the “zeroth step” — happens when a flag is thrown up by their SIEM solution. After an incident is generated by an alert originating from the SIEM, the standard incident response perspective is often to send it directly to a SOC analyst to review.

What Finley and his team like to do first is pass the incident through a security orchestration, automation, and response (SOAR) solution to research the event before an analyst has to spend time (or waste time, if it’s a false positive) looking at it.

After that, if there’s something still worth investigating, the team will gather information from a number of different data sources. At this stage, they rely on a few malware sandboxes that they leverage in slightly different ways — but as much as possible, their function is to automate the simpler and more repetitive tasks.

After all that, if an incident still needs manual review, it’ll get passed along to a SOC analyst. Finley explains that his analysts then review the evidence and gather more information through SIEM or SOAR solutions using toolkits that they refine over time.

The goal of structuring their runbooks this way is to direct the focus of the analysts toward more project-focused work, threat hunting, and research, and away from menial yet time-consuming tasks that can plague incident response teams.
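
The SIEM-to-SOAR-to-analyst flow described above can be sketched in a few lines of Python. Everything here — the function names, the allowlist, and the enrichment logic — is an illustrative assumption, not Relativity’s actual tooling:

```python
# Hypothetical sketch of a SIEM → SOAR → analyst triage flow.

KNOWN_BENIGN_HASHES = {"d41d8cd98f00b204e9800998ecf8427e"}  # example allowlist

def enrich(incident: dict) -> dict:
    """Automated research a SOAR playbook might run before an analyst looks."""
    incident["known_benign"] = incident.get("file_hash") in KNOWN_BENIGN_HASHES
    return incident

def triage(incident: dict) -> str:
    incident = enrich(incident)
    if incident["known_benign"]:
        return "auto-closed"         # false positive filtered without analyst time
    return "escalate-to-analyst"     # manual review with SIEM/SOAR toolkits

print(triage({"file_hash": "d41d8cd98f00b204e9800998ecf8427e"}))  # auto-closed
print(triage({"file_hash": "deadbeef"}))  # escalate-to-analyst
```

The point of the sketch is the ordering: automated enrichment and filtering happen before any human sees the incident, which is what frees analysts for project work.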

5 Ways to Hunt Threats

Finley also discussed in detail the different data processing techniques he and his team use when hunting for anomalies in their systems. The team might begin with a specific goal in mind, but more often than not, they turn up “collateral findings” in the course of their investigation. He outlined five techniques used by his team:

  1. Querying Data for Specific Results: The simplest technique, this involves using a console such as Splunk or Elasticsearch to research a specific event or emerging campaign, and then comparing the results to data within your internal environment.
  2. Clustering/Grouping: This technique involves grouping data around a shared data point to find a consistent set of multiple characteristics or to identify outliers. “This can be especially useful when we start comparing clusters to one another,” Finley says.
  3. Stack Counting: A “low-cost, high-reward” way to hunt for anomalies, stack counting involves taking aggregate counts of values within a data set and then looking into the lowest counts, which represent rare occurrences. Researching these outliers might turn up legitimate but uncommon users — or it might reveal something malicious.
  4. Scatter/Box Plotting: These relatively straightforward graphing techniques can help analysts visually identify relationships and distributions between two variables in order to identify anomalies in the patterns. “Ultimately, what we’re trying to do here,” says Finley, “is establish the normal curve and then identify and research any outlying characteristics.”
  5. Isolation Forests: Another method for identifying outliers, isolation forests randomly partition a data set into binary trees and measure the average path length from the root to each observation. Because anomalies are easier to isolate, they end up with shorter average paths than values that fit the expected distribution. Whatever stands out is worth researching further.
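
As a minimal illustration of the stack counting technique, the sketch below aggregates a hypothetical set of user-agent strings pulled from proxy logs and surfaces the rarest values (the data and the rarity threshold are invented for the example):

```python
from collections import Counter

# Hypothetical sample of user-agent strings from proxy logs.
user_agents = (
    ["Mozilla/5.0 (Windows NT 10.0)"] * 120
    + ["Mozilla/5.0 (Macintosh)"] * 45
    + ["curl/7.68.0"] * 2
    + ["python-requests/2.28"] * 1
)

# Stack counting: aggregate the values, then inspect the rarest ones first.
counts = Counter(user_agents)
rare = [agent for agent, n in counts.most_common()[::-1] if n <= 3]

print(rare)  # the low-count outliers worth a closer look
```

The same pattern works for process names, login sources, DNS queries — anything where the common case drowns out the interesting one.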

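In the same spirit, the “establish the normal curve, then research the outliers” idea behind scatter and box plotting can be approximated numerically. This is a simplified stand-in using a standard-deviation cutoff over invented per-host traffic numbers, not a description of Relativity’s actual analysis:

```python
import statistics

# Hypothetical daily outbound byte counts (in KB) for one host;
# the last value is deliberately anomalous.
daily_kb = [980, 1010, 1005, 995, 1020, 990, 1000, 5400]

mean = statistics.mean(daily_kb)
stdev = statistics.stdev(daily_kb)

# Flag anything more than two standard deviations from the mean.
outliers = [x for x in daily_kb if abs(x - mean) > 2 * stdev]

print(outliers)  # → [5400]
```

A box plot of the same series would flag the same point visually; the cutoff just makes the decision rule explicit.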
Looking back at the three primary functions of threat intelligence that Finley lays out — that it should generate and correlate incidents in one place, that it should fulfill intelligence requirements, and that it should be presented in reports — it’s clear why automation plays such a big role in Relativity’s application of threat intelligence.

Without automation, analysts simply don’t have time to focus on the critical aspects of developing effective, actionable threat intelligence. But when they don’t have to worry about manually sorting through countless meaningless alerts and false positives, they can take the time to actually perform analyses, using data science techniques and deductive reasoning to create meaningful reports that decision makers can take action on.

To see all of the topics covered in Finley’s full presentation, request the free webinar recording.