Podcast

Blending Threat Intelligence With Cyber Risk Frameworks

Posted: 29th October 2018
By: Amanda McKeon

Our guest today is Rick Tracy. He’s chief security officer at Telos, a cybersecurity, IT risk management and compliance, secure mobility, and identity management company. In addition to his duties as CSO, Rick is co-inventor of Xacta, a cyber risk management platform.

Rick shares his experience from over three decades in the industry, his thoughts on regulations like GDPR and what we might expect to see here in the U.S., how he handles briefing his board of directors, the helpful utility of the NIST framework, and how threat intelligence can inform an organization’s approach to managing risk.

This podcast was produced in partnership with the CyberWire.

For those of you who’d prefer to read, here’s the transcript:

This is Recorded Future, inside threat intelligence for cybersecurity.

Dave Bittner:

Hello everyone, and welcome to episode 80 of the Recorded Future podcast. I'm Dave Bittner from the CyberWire. Our guest today is Rick Tracy. He’s chief security officer at Telos, a cybersecurity, IT risk management and compliance, secure mobility, and identity management company. In addition to his duties as CSO, Rick is co-inventor of Xacta, a cyber risk management platform.

Rick shares his experience from over three decades in the industry, his thoughts on regulations like GDPR and what we might expect to see here in the U.S., how he handles briefing his board of directors, the helpful utility of the NIST framework, and how threat intelligence can inform an organization’s approach to managing risk. Stay with us.

Rick Tracy:

I'm an oddball. I've been with Telos for 32 years, and I've evolved through a number of roles here over that time. I became the CSO … I don't know exactly, but probably 12 or 13 years ago. That was after I helped start a venture within the company that we called Xacta. It's also a product name: essentially, a risk and compliance management platform that many organizations use to help manage assessment and authorization activities, using the risk management framework as the backdrop.

So I wear two hats in the company. I continue to provide some product vision for Xacta, and I have this other corporate role, where I'm responsible for corporate security as a CSO.

Dave Bittner:

And in the time that you've been there, over 30 years, what have you witnessed? What's been the evolution of security when it comes to cyber?

Rick Tracy:

It's been interesting because if I reflect back to the '90s or so, internet security wasn't even a thing in the early '90s, right? And people were concerned about maybe a little bit of computer security, but it has evolved. Not just in terms of the concerns and so forth, but the language that's used to describe it: from infosec to information security to cybersecurity, from internet security to computer security. The language itself has evolved, along with the concerns, versus five or 10 or 15 or 20 years ago.

Dave Bittner:

So take me through the process of how Xacta was born.

Rick Tracy:

It was an observation on my part in the mid-'90s, when organizations had this new certification and accreditation requirement that was born out of the Computer Security Act of 1987. It was the trigger for all these sorts of security activities that federal agencies had to adhere to. And they weren't well thought through at the time, because it was all they really knew. As the requirements evolved over time for the DoD and the intelligence community, the processes became more aligned with each other, to the point where we are right now. That's the whole point of the risk management framework: to have a consistent process across the federal government. But prior to that, there was DITSCAP and then DIACAP for the DoD, similar sorts of conventions for civilian agencies, and something different for the intelligence community. The RMF has coalesced all of those C&A (now A&A) activities around one framework, called the risk management framework.

So the observation on my part, going back to the early to mid-'90s, was that there was a lot of very consistent documentation and process that could be automated, but everyone was starting from a blank sheet of paper every time they had to take a system through this A&A (at that time, C&A) process. And so, my thought was process automation: automate as much of the C&A process as you could. We started a bit at a time, automating various aspects and building on that, to the point where we were able to automate more and more of the process. It became easier for us once all the federal agencies started to adhere to the risk management framework, because it was no longer multiple ways of doing business depending on whether you're a civilian agency, DoD, or IC; there was a fairly consistent process across all three of those tiers.

Dave Bittner:

And so, from a practical point of view, what does that look like? If I'm one of those agencies, how do I go about it?

Rick Tracy:

Yeah, well, a way to think about it is that if you were to stack up all of the NIST publications that make up this thing called the risk management framework, it would be a stack of paper on your desk about 18 inches tall. So rather than you having to figure out how to operationalize all those documents, a dozen or so with thousands of pages, and how they relate to each other, we created an application that operationalizes them in a workflow-enabled, wizard-driven way, with logic behind the scenes helping you calculate things like your CIA (confidentiality, integrity, availability) value and understand which controls are appropriate for your system, based on that calculated CIA value.

I like to equate it to TurboTax. An organization, the system owner, answers questions and inputs data and information about their systems: their assets, the hardware and software, vulnerabilities, test results that help demonstrate whether you're compliant with certain controls or not. And as you add information (input information, ingest information), the A&A package, the dozen or so documents that are required by the RMF, generates itself, which saves an enormous amount of time. The manual effort involved with simply creating the documents is just one piece of the process, and automating it saves our customers a huge amount of time.
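
To make the calculation Rick describes concrete, here is a minimal sketch of the FIPS 199 "high-water mark" rule that typically drives this kind of CIA-based categorization, which in turn selects a NIST SP 800-53 control baseline. The function names and structure are illustrative assumptions, not Xacta's actual implementation.

```python
# Illustrative sketch of CIA-based system categorization; function
# names and structure are assumptions, not Xacta's actual logic.

IMPACT_LEVELS = {"low": 1, "moderate": 2, "high": 3}

def categorize_system(confidentiality: str, integrity: str, availability: str) -> str:
    """FIPS 199 high-water-mark rule: the system's overall
    categorization is the highest impact level among its
    confidentiality, integrity, and availability ratings."""
    ratings = (confidentiality.lower(), integrity.lower(), availability.lower())
    return max(ratings, key=lambda r: IMPACT_LEVELS[r])

def control_baseline(categorization: str) -> str:
    """The categorization selects the corresponding NIST SP 800-53
    baseline (low, moderate, or high)."""
    return f"NIST SP 800-53 {categorization} baseline"

# A system with moderate confidentiality, low integrity, and high
# availability needs is categorized "high" overall.
rating = categorize_system("moderate", "low", "high")
print(rating)                    # -> high
print(control_baseline(rating))  # -> NIST SP 800-53 high baseline
```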

Dave Bittner:

Now, on the Telos side of things, as the CSO, what is your day-to-day like running that side of things?

Rick Tracy:

It varies, right? So, some of the things that I concern myself with are, to use a recent example, the NIST SP 800-171 compliance requirement, under which contract holders have to demonstrate that they meet certain minimum cybersecurity standards in order to be eligible to hold government contracts. So there are things like that, and then the GDPR requirements that came down, or I guess went into effect, earlier this year.

One of the things that I've learned is that even if you're not in a position where you have to comply with GDPR, because you don't do business overseas or you don't collect non-U.S. personal information, a lot of times you have to be GDPR-compliant anyway in order for large organizations to do business with you. In general, it seems like organizations are trying to raise the bar, and they don't want to put themselves at risk by doing business with someone who's not GDPR-compliant, even if that organization doesn't have to be. If you're not GDPR-compliant, it's really difficult to get through the term sheets of these large organizations that are required to be, in order to partner with them.

Dave Bittner:

What do you suppose is coming along in the United States in terms of our own version of GDPR? Do you think we're headed in that direction?

Rick Tracy:

I think we are, and I think that certain states are taking a lead, like California did in the past with other privacy standards years ago. I believe that California has introduced something that's similar to GDPR, and more progressive states, like Massachusetts, just reflecting on how things have happened in the past, may follow. I wouldn't be surprised if we start to see state-by-state adoption of something that looks like GDPR, if not GDPR itself.

Dave Bittner:

Do you suppose that that's the way to handle it? Is that something better handled at the federal level than a state-by-state approach?

Rick Tracy:

From my perspective, it's much better if it's done at the federal level, because as a company, if you have employees in all 50 states, and each state has a different standard, it just becomes difficult, unless you choose the high watermark, the most rigorous standard, and apply that across the board. Otherwise, what you have are 50 versions of something like GDPR for privacy, depending on where the employee lives.

Dave Bittner:

Now, as chief security officer, what is your relationship with your board of directors? How do you handle communications with them?

Rick Tracy:

I work with general counsel and internal audit. We brief an executive team every month about security status, and we manage a risk register, if you will, and we report on progress toward the remediation of those risk items at least once a year, if not more frequently, sometimes twice a year. There's a status briefing to the board of directors about our risk posture, but it's not the kind of discussion that some people might envision. We're not talking about unpatched systems or vulnerability scan results or things of that nature. It's higher level. The dialogue for the board isn't one about cybersecurity per se; it's more about the risks associated with cyber. From my perspective, based on my experience, when you start talking about risk to the organization, that's a language that's much easier for the board of directors to understand, because their job, generally, is managing risk. What we're talking about when we go to see them are cyber-specific risk considerations for them to be aware of.
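
For readers unfamiliar with the risk register Rick mentions, here is a minimal sketch of what a single board-level entry might track. The fields are illustrative assumptions, not Telos's actual register.

```python
# Illustrative board-level risk register entry; fields are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str       # business-level risk, not a raw scan finding
    likelihood: str        # e.g., low / medium / high
    impact: str            # business impact if the risk is realized
    owner: str             # accountable executive or team
    remediation_plan: str  # what is being done to close the gap
    target_date: date      # when remediation should complete
    status: str            # e.g., open / in progress / closed

entry = RiskRegisterEntry(
    risk_id="RR-2018-04",
    description="Loss of customer PII through a third-party integration",
    likelihood="medium",
    impact="high",
    owner="CSO",
    remediation_plan="Vendor security assessments and contract updates",
    target_date=date(2019, 3, 31),
    status="in progress",
)
print(f"{entry.risk_id}: {entry.description} ({entry.status})")
```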

Dave Bittner:

And so, are you functioning as the translation layer, as the Rosetta Stone between those folks and your own team?

Rick Tracy:

Well, I would like to take credit for that, but actually, the Rosetta Stone for us is the NIST Cybersecurity Framework, because it allows us to associate granular risks, and here I'll go back to things like unpatched systems and more technical issues, with language the board can work with.

The construct of the CSF is: on the far left, you have five functions: identify, protect, detect, respond, recover. That gets broken into more granular detail through categories that feed those five functions, subcategories that feed the categories, and then informative references, which are very specific controls that you choose to engage to help you demonstrate your cyber risk objectives (what I think is important for my organization to do and achieve from a readiness standpoint) and your cyber risk outcomes. So as you assess yourself against the categories and subcategories you've engaged, you have the ability to show the gap between "What's my objective?" and "What's my actual status?" and then some way of conveying what you're doing to bring your outcome in line with your objective.

So that's my long-winded way of explaining the value of the Cybersecurity Framework. It allows people in IT (I'll say, in the server room) to communicate with people in the boardroom through this structure that the CSF offers: specific controls, higher-level subcategories, and even higher-level categories that feed those five functions. That allows everyone to be in sync in the cyber risk management dialogue, from the server room to the boardroom.
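
As a rough illustration of the structure Rick just walked through (functions, categories, subcategories, informative references, and the gap between objective and outcome), here is a hedged sketch. The 0-to-4 maturity scale and the specific mappings shown are illustrative assumptions, not an official CSF scoring scheme.

```python
# Illustrative CSF hierarchy with an objective-vs-actual gap report;
# the maturity scale and mappings are assumptions, not official CSF.

assessment = {
    "Protect": {                                 # one of the five functions
        "PR.AC (Access Control)": {              # category
            "PR.AC-1 (identities and credentials are managed)": {  # subcategory
                "objective": 4,                  # target maturity (assumed 0-4 scale)
                "actual": 2,                     # assessed maturity
                "informative_refs": ["NIST SP 800-53 AC-2", "ISO/IEC 27001 A.9.2"],
            },
        },
    },
}

def report_gaps(assessment: dict) -> None:
    """Walk function -> category -> subcategory and print the
    objective-versus-actual gap for each engaged subcategory."""
    for function, categories in assessment.items():
        for category, subcategories in categories.items():
            for sub, data in subcategories.items():
                gap = data["objective"] - data["actual"]
                print(f"{function} > {category} > {sub}: gap = {gap}")

report_gaps(assessment)  # -> Protect > PR.AC (...) > PR.AC-1 (...): gap = 2
```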

Dave Bittner:

Now, do you find that that's a complete solution? Are there any areas where you find that that framework might come up a little short for you?

Rick Tracy:

I would say at this point, no. The fact that it's a framework implies that it's flexible, and it allows you to engage controls that are meaningful to your organization. At least at the far left of the framework (identify, protect, detect, respond, recover), it's language that you can associate lots of different things with. So, no. I would say that the Cybersecurity Framework, to this point, allows us to cover our bases in terms of communicating cyber risk objectives and outcomes.

Dave Bittner:

So I want to switch gears a little bit and talk about threat intelligence. What is your take on threat intelligence? Where do you suppose it fits in?

Rick Tracy:

Well, it's interesting that you ask that question, because just a few weeks ago, we had a roundtable with some of our customers, which we do periodically, where we just say, "Beyond the A&A sort of benefits that our technology gives you, what are some other capabilities that we might be able to provide?" And one of our customers said, "Rick, I'd like you to help us with threat-informed risk." What that means is that as part of this A&A process I mentioned a few minutes ago, we help our customers collect lots of information about their assets. What type of assets? Servers, cloud resources, laptops, workstations; the hardware and software associated with those assets; the vulnerabilities associated with those assets; how those assets comply with certain required security controls, and in this particular case, I'm talking about NIST SP 800-53 controls.

And so, there's this rich set of data that has been used, to this point, for one specific purpose: driving this A&A process. The suggestion is that if we were to bring in threat intelligence data and blend it with the asset information I just described, it would help organizations understand where there is real risk, and that would help them prioritize investments and remediation, instead of treating everything as a risk just because there's a vulnerability. And you know the definition of risk: "vulnerability plus threat plus impact." Absent threat or impact, you could argue that there is no risk, right?

So the important thing, and this is by design, is that the RMF doesn't require this kind of active threat intelligence information. You do have to consider, as part of the process, threats that arise from where a system is deployed: say, an area where there's geopolitical unrest, or where there's hurricane risk, natural disaster kinds of things. But there's no requirement to feed in active threat intelligence and blend it with the data you're collecting for the purpose of A&A (asset data, vulnerabilities, compliance, and so forth) to provide this useful, threat-informed risk awareness, if you will.

You ask me, what's my take on it? In a vacuum, I think you could be overwhelmed by the data and not know what to do with it. But if there's a way for you to target threat intelligence that you know is meaningful to your organization, and blend it with information that's specific to how your systems are configured, I think there is a way for you to really derive benefit from the combination of those two types of data when you put them together.
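
Rick states the definition additively, but his point that risk disappears when threat or impact is absent maps naturally onto a multiplicative model. Here is a toy sketch under that assumption; the 0.0-to-1.0 ratings are invented for illustration.

```python
def risk_score(vulnerability: float, threat: float, impact: float) -> float:
    """Toy multiplicative risk model with each factor rated 0.0-1.0.
    Any factor at zero zeroes the risk, matching Rick's observation
    that absent threat or impact, there is arguably no risk."""
    return vulnerability * threat * impact

# A severe vulnerability with no active threat and no business impact
# contributes no risk under this model.
print(risk_score(vulnerability=0.9, threat=0.0, impact=0.8))  # -> 0.0

# The same vulnerability on a threatened, business-critical system
# scores much higher and is worth prioritizing.
print(risk_score(vulnerability=0.9, threat=0.7, impact=0.8))  # -> 0.504
```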

Dave Bittner:

So how do you envision that blending process taking place?

Rick Tracy:

It's going to be interesting, because I believe there are many commercial threat intelligence feeds, and many of our customers have their own threat intelligence sources that we, as a commercial entity, probably don't have access to. So what we're going to have to do is create a connector framework that allows the user to bring in structured threat intelligence data, so that our system can map it appropriately to the other type of asset information that I mentioned.

We are in the process of creating a schema that allows us to associate threat intelligence data. So we'll say to you, "You can bring it in, but it has to be in this particular format in order for it to work with the other data that you already collect for your A&A process." It's going to have to be flexible, because threat intelligence data is not one-size-fits-all: depending on the organization, the source of that data is going to be different. That's always a challenge, but it's something we're prepared to deal with as we rebuild our Xacta platform from the ground up, which will allow vast amounts of information, in various formats (streaming, flat file, and so forth), to be brought into our platform and mapped appropriately to do various things.

The thing that we're talking about right now is correlating threat intelligence information with your system assets and resources to determine whether there really is risk or not. So there's not an easy answer. It's going to require a good bit of design engineering on our part to deal with the flexibility that's required, based on where our platform is deployed and which threat intelligence feeds customers want to use.
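
As a sketch of the connector idea Rick describes (normalizing heterogeneous feeds into one schema, then correlating them against the asset inventory already collected for A&A), here is a minimal illustration. The schema fields and matching logic are assumptions, not Xacta's actual design.

```python
# Illustrative normalization-and-correlation sketch; the schema and
# matching rule are assumptions, not Xacta's actual design.
from dataclasses import dataclass

@dataclass
class ThreatRecord:
    """Threat feed entry normalized into a common schema."""
    indicator: str         # e.g., a CVE identifier or file hash
    affected_product: str  # product/version the threat targets

@dataclass
class Asset:
    """Asset record of the kind already collected during A&A."""
    name: str
    installed_products: list

def correlate(threats: list, assets: list) -> list:
    """Return (asset, indicator) pairs where a normalized threat
    record matches software actually installed on the asset,
    i.e., where there is plausibly real, threat-informed risk."""
    return [
        (asset.name, threat.indicator)
        for threat in threats
        for asset in assets
        if threat.affected_product in asset.installed_products
    ]

threats = [ThreatRecord("CVE-2018-XXXX", "example-web-server 2.4")]
assets = [Asset("payroll-srv-01", ["example-web-server 2.4", "example-db 5.7"])]
print(correlate(threats, assets))  # -> [('payroll-srv-01', 'CVE-2018-XXXX')]
```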

Dave Bittner:

Yeah, I would imagine one of the challenges is, as always, dealing with information overload and distilling that intelligence so that it's actionable.

Rick Tracy:

Yes, and that's the point about organizations knowing which threat intelligence information is useful to them. Lots of organizations just subscribe to a threat intelligence feed, and as you just said, they get overwhelmed with data that they have to figure out how to relate to their organization, or determine isn't relevant to them. In some ways, it becomes a lot of work to get the benefit you're looking for. The point of allowing organizations to connect the threat intelligence feeds that are meaningful to them is that there'll be less of that being buried by data that's not useful. They know this data is the exact type of data they're looking for, and they want to relate it to their IT environment to highlight risky situations.

Dave Bittner:

Now, what's your advice for the organizations that are trying to get a handle on this? Maybe they're just starting out, and they're trying to figure out how to approach it. What would your words of wisdom be?

Rick Tracy:

Like everything else, I would say, don't try to do too much too soon. Take it a step at a time, and build on the success that you realize from those incremental steps. You have to start someplace. Here's an example, another thing we're working on that might be beneficial to a lot of organizations. One of the things that we advocate is broadening your view of what "asset" means: when I say assets, most people think IT assets. But if you broaden that view and ingest data from your LDAP system, you can understand which people fulfill which roles, have access to which systems, and are assigned which IT resources. Then you bring in data from phishing simulation companies, or phishing-type research companies, that let you know that this type of phishing exploit was just seen and it's targeting the CFO role in organizations.

That's an example of threat intelligence mapped to asset and configuration information that would be useful to lots of organizations. You can easily see the correlation of that information (a known phishing exploit, a CFO role within the company) and the extra care that needs to be taken to ensure that person is informed of this type of phishing exploit, or that some other automated mechanism blocks it at the border device, a Proofpoint-type email gateway. So I think it's about taking things a step at a time, proving that they work, and building on them, as opposed to trying to boil the ocean ("I'm going to take in massive amounts of threat intelligence data because it's going to help me"), only to find out that you're buried by it, with a lot of people working to figure out what's relevant and what's not.
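
Here is a minimal sketch of the correlation Rick describes: joining directory (LDAP-style) role data with a phishing alert that targets a specific role, so the right people can be warned or the message blocked at the mail gateway. All names, fields, and the alert format are illustrative assumptions.

```python
# Illustrative role-based phishing correlation; names, fields, and
# the alert format are assumptions.

directory = [  # stand-in for data ingested from an LDAP system
    {"user": "j.doe",   "role": "CFO",      "email": "j.doe@example.com"},
    {"user": "a.smith", "role": "Engineer", "email": "a.smith@example.com"},
]

phishing_alert = {  # stand-in for a phishing research feed record
    "campaign": "invoice-fraud-2018-10",
    "targeted_role": "CFO",
}

def users_at_risk(directory: list, alert: dict) -> list:
    """Return users whose directory role matches the role the
    phishing campaign is reported to target."""
    return [u for u in directory if u["role"] == alert["targeted_role"]]

for user in users_at_risk(directory, phishing_alert):
    # In practice, this might notify the user and/or push a blocking
    # rule to the email security gateway.
    print(f"Warn {user['email']} about campaign {phishing_alert['campaign']}")
```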

Dave Bittner:

Our thanks to Rick Tracy from Telos for joining us.

Don't forget to sign up for the Recorded Future Cyber Daily email, where every day you'll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.

We hope you've enjoyed the show and that you'll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I'm Dave Bittner.

Thanks for listening.
