Optimizing the Intelligence Cycle at Optum
June 11, 2018 • Amanda McKeon
Our guest today is Vince Peeler. He’s the manager of cyber intelligence services at Optum, one of the largest healthcare and services providers in the world. He shares his unlikely journey from a career as a naval aviator to cybersecurity, and how lessons he learned in the military help inform his approach to cyber threats today. We’ll also focus on the intelligence cycle, and the role it can play in organizing and focusing the efforts of cybersecurity teams. He offers tips on integrating threat intelligence, and making the most of automation to enable your analysts to maximize their effectiveness.
For those of you who’d prefer to read, here’s the transcript:
This is Recorded Future, inside threat intelligence for cybersecurity.
Hello, everyone. Thanks for joining us for episode 60 of the Recorded Future podcast. I’m Dave Bittner from the CyberWire.
Our guest today is Vince Peeler. He’s the manager of cyber intelligence services at Optum, one of the largest healthcare and services providers in the world. He shares his unlikely journey from a career as a naval aviator to cybersecurity, and how lessons he learned in the military help inform his approach to cyber threats today. We’ll also focus on the intelligence cycle and the role it can play in organizing and focusing the efforts of a cybersecurity team. Stay with us.
My journey started in the Navy and in aviation. Since I was a little boy, that's what I wanted to go do: fly jets in the military. I thought it would be the Air Force, but the Air Force didn't need anybody. The Navy did, so that's how I ended up in the Navy. I'm glad that I did. I had a great time there. I spent roughly 14 years in aviation. Every jet I was assigned (originally, S-3B Vikings, and then later on, EA-6B Prowlers) kept going away, so I decided maybe I should go someplace that had a more secure future. Somewhere where I could grow the collection experience I was gaining in those platforms and ask, what else can I do with that?
That's kind of how I went into the intelligence side. The Navy, with great timing, came in with what they called an "information dominance warfare officer," so that's what I became, with a specialization in intelligence. They brought together, into one community, the cryptologic warfare officers, the traditional IPs, or information professionals (your traditional IT folks), and meteorology, all under one roof. One community.
We had to do basic training across those disciplines. I started out fairly senior in the intel world, so I quickly moved from analyst, or watch officer, into leading intelligence. That's kind of how I fell into this position I'm in at Optum, but I first got started in the cyber realm because, for whatever reason, I was always the guy that ended up being in charge of IT systems on the aviation side. I used to laugh about that, because my undergrad is psych, and I was sitting among all these engineering majors and they're going, "Well, you be the computer guy." Like …
Right, because of course, yeah.
Right, exactly. Like, you're the engineering major, and some of you are even computer engineers. Nope, nope. They'd had enough of that; they wanted to do something different. So I'm like, "Sure, I can just do it." So I did. I learned a lot, I enjoyed it, but that led me to understand … I could just see the changes. What I mean by that, really, is what I call my "first real intel job." It was in counterterrorism at SOCOM headquarters.
We deployed forward with special operations forces, and that's where I did what I call growing up in the intelligence community. I feel very fortunate to have had that position. Not only did we get to lead things back at SOCOM headquarters, and that's where the cyber bit came in, because they were like, "Okay, we have all these different threats, what should we do?" I was kind of the advocate for setting up a cyber media team, and we did. I became the OIC (officer in charge) after I came back from my forward deployments. We looked at the terrorist threat through cyberspace.
At that time, it was primarily a lot of influence-type stuff. That was back with Inspire magazine with Anwar al-Awlaki and Samir Khan. Following those target sets really gave me an appreciation for the fact that you're tracking people. That was the first time that really sunk in. I mean, we kind of talked about it when I was doing aviation, especially in the S-3s, where we were primarily trained to track submarines, again, a target you don't see. But it was still a sub. You didn't really think about the people, initially, and I was younger. I didn't really think about it back then.
On the terrorist side, it really was about people. That’s one of the things that I think cyber intelligence really brings to cyber defense. What I tell people at Optum is, I’m here to talk about the people on the other side. The adversaries themselves. Yeah, the IT part is a tool for them, but the tools are used by the people and that’s kind of … I was just having a conversation with some of our leadership about this at Optum not too long ago. Just about how it’s interesting because the tools that are there get modified and used differently, and there’s always some new way of doing it.
That's not based on the tool. That's not based on the design of it. It's based on how a person can interpret and use that tool to get around defenses. That's what we find really interesting. We could have a good conversation on that. That's the kind of stuff that's exciting about this field. Now, coming from that large headquarters and then moving on … Really, my last job in the Navy was overseeing manned EP-3 ISR operations globally, running dispersed teams against a huge target set. That's where leading and managing really sunk in, and that's where we're going to lead into with this. It's managing that intelligence.
Now, there’s the cycle, and everybody kind of talks about the intelligence cycle. I see it as a management tool, not an operational tool, if that makes sense.
I see it as, you’re building an intel team, and these are the elements that you need to concern yourself with. But on a daily basis, I don’t go, “Well, now we need to move and proceed into the collection phase.” You know, you don’t really think about it like that. It’s just the daily stuff that you do. But as I’m looking at, how do I manage a team, I look at the different elements and say, “Do we have something that covers this? How are we going to proceed with the requirements, with the collection, with the analysis? Are we using the best analytical methods?” Those kinds of questions.
Really, what I’ve been focusing on lately is, how do we get better dissemination. Yeah, there’s a lot of automation that can happen, and that’s where we’re looking into doing a lot more of that, but how do we get people to really understand what it is that we’re saying? We can automate tools, we can automate that flow of data, but when it comes to leadership and the higher-ups making strategic decisions, how do I make them understand what the threats are and where we’re vulnerable?
How do we make that digestible? One of the ways I was looking at it is with micro-learning, taking some of what I learned (kind of a weird background, but my master's is in learning technologies) and building learning environments. I kind of equate intelligence to a learning process when you get to the dissemination phase. Also in the analysis phase, with the analyst, but really, I see it as a learning process for whoever it is that you're giving it to, because there are different levels.
There's the network defenders, like the SOC analysts. There's leadership trying to understand what the threats are so that they can make budgetary decisions, possibly on, "Hey, where are the gaps in our defenses? Is there something new out there that we need to really be concerned with?" But how do you get it to them? Typically, most people default to words, whether they're on PowerPoint, or a Word document, or a PDF, or on a website of some sort. Yeah, you're pushing it to them, but if they're not really pulling it in, sometimes you get asked a question about something you just published a day and a half ago, right? You know it's not being ingested sometimes because of everything else, and it just kind of becomes noise in some regards.
How do you break through that? Those are some of the things I’ve been thinking about lately, is how do you make more bite-sized chunks that are easily digestible for folks? Whether that’s in smaller products, or maybe in videos, or like this podcast, or something that’s in that couple-of-minute framework, that micro-learning type event. Those are some of the things that we’ve been kind of discussing on a way of, how do we move forward with that, and how do we get beyond the Word document or PDF, or the brief on PowerPoint?
Well, I want to dig into some of the details about the intelligence cycle. Why don’t we back up a little bit and just sort of describe what is entailed in that. You mentioned some of the steps, but can you take us through the five steps and what they mean?
Yeah, I sure can. The biggest one is going to start off with your planning and direction — your requirements. Now, this can be somewhat tricky — especially for organizations like corporations — to understand, because what you’ll end up getting is, “I want intel.” That happens a lot, from what I’ve been able to gather from talking with folks across the industry a little bit. Different groups and stuff. But, that’s what drives everything. Why do I say that? Because I’ve seen this, whether it was in the intelligence community in the government, or some of the conversations early on at Optum, it was numbers of products.
I don’t necessarily care about the numbers of products. What I care about is fulfilling the requirements. What is it that we need to do? That’s where this whole piece fits in, really. It’s what I call requirements-based metrics, because they love metrics in corporate America. So, I tell them I need a requirements base. Did we fulfill the requirements? What did we do to fulfill them, and how did we do it? Like, types of products or something — more than just a conveyor belt number. Because if you want to go to a conveyor belt, we’ll produce a lot more reporting that won’t get read, and it won’t mean anything, but our numbers will look great, you know.
Moving into the collection piece, this is where you gather your data. What sources are you going to use? This is also where I like to talk about source validation. Not all sources are created equal; some sources are better than others. But how are you going to collect? I'll throw in something we're working on with Recorded Future: we've mapped the collection requirements to the alerting through Recorded Future, and with the help of your analysts, we're tailoring those alerts to the requirements. Because we have the requirements numbered, when you get an email alert, it says which requirement it satisfies. We're trying to tie that back to who's supplying the data for us. Some of that validation process begins there.
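The alert-to-requirement tagging Vince describes can be sketched in a few lines. To be clear, the requirement IDs, keyword routing, and alert shapes below are hypothetical illustrations, not Optum's actual requirements or Recorded Future's alert schema; the point is measuring requirement fulfillment rather than raw product counts.

```python
from collections import Counter

# Hypothetical numbered intelligence requirements; IDs and wording are
# illustrative, not a real requirements list.
REQUIREMENTS = {
    "IR-01": "Threat actors targeting healthcare",
    "IR-02": "New ransomware campaigns",
    "IR-03": "Credential leaks affecting our domains",
}

# Keywords that route an incoming alert to the requirement it satisfies.
ROUTING = {
    "healthcare": "IR-01",
    "ransomware": "IR-02",
    "credential": "IR-03",
}

def tag_alert(alert_title: str) -> list[str]:
    """Return the requirement IDs an alert helps satisfy."""
    title = alert_title.lower()
    return [req for kw, req in ROUTING.items() if kw in title]

def requirements_metrics(alerts: list[str]) -> Counter:
    """Count fulfilled requirements instead of raw product volume."""
    hits = Counter()
    for alert in alerts:
        for req in tag_alert(alert):
            hits[req] += 1
    return hits

alerts = [
    "New ransomware campaign hits healthcare providers",
    "Credential dump mentions corporate domains",
]
print(requirements_metrics(alerts))
```

The "requirements-based metrics" answer then falls out for free: instead of reporting "we published N products," you report which numbered requirements were satisfied and by what.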
Then, you're moving into the processing and exploitation phase. This is where you're trying to make sense of all the data that you've got. As we're seeing, there's a lot of commentary around how we're all drowning in data. Yeah, that's true, so how do you make sense of it? That's where that step is. Then, there's the analysis and production phase. This is where you're making those products that we were talking about, no matter what kind of products they are. Really, some of the analysis and production … If it's just information like IOC feeds and stuff, you can automate some of that and it kind of skips through some of the analysis. When I think of analysis, I'm thinking intel analysis, where you're using structured analytic techniques.
Then, you have the dissemination and integration phase, and that’s what we were talking about earlier, about getting the information to the right people at the right time to make the right decisions. Then, from there, is the feedback and reevaluation phase.
Now, some people will talk about how feedback happens across the cycle, and I kind of agree with that. Since I don’t see it as a “step one, two, three” kind of cycle — but it’s a management thing — I don’t typically involve it at every step. I just know that it happens at every step. I just want to make sure that there’s a feedback method at each step. It kind of hangs there on the evaluation point, and I just kind of make sure, like I said. I just use this as a, “Hey, do we have what we need to make a really solid intelligence team?” That’s kind of where that happens.
Those are the steps, traditionally. Some models change it, and some of them just go straight from collection to analysis. Sometimes the processing is broken out and separate. I like to break it out separately in the cyber world, because with all the automation that we're trying to get through, we can weed through some of that IOC-type stuff and really get to the analysis phase.
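Vince's "management tool, not operational tool" framing can be sketched as a coverage checklist over the phases rather than a sequential pipeline. The phase names follow the traditional cycle he just walked through; the capability entries are hypothetical examples of what a team might list under each.

```python
# The intelligence cycle treated as a management checklist: for each
# phase, does the team have some capability covering it? Not a
# "step one, two, three" workflow.
PHASES = [
    "planning_and_direction",
    "collection",
    "processing_and_exploitation",
    "analysis_and_production",
    "dissemination_and_integration",
    "feedback_and_evaluation",
]

def coverage_gaps(capabilities: dict[str, list[str]]) -> list[str]:
    """Return phases with no capability assigned -- the management view."""
    return [phase for phase in PHASES if not capabilities.get(phase)]

# Hypothetical team self-assessment, loosely echoing the interview.
team = {
    "planning_and_direction": ["numbered requirements"],
    "collection": ["threat feeds", "alerting tied to requirements"],
    "processing_and_exploitation": ["automated IOC triage"],
    "analysis_and_production": ["structured analytic techniques"],
    "dissemination_and_integration": [],  # gap: still only PDFs and slides
    "feedback_and_evaluation": ["feedback method at each step"],
}
print(coverage_gaps(team))  # -> ['dissemination_and_integration']
```

Run periodically, this is the "do we have something that covers this?" question from earlier in the conversation, made explicit.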
Can you go into that a little more? I’m particularly interested in how you dial in the balance between automation and using your human assets.
Here's the tricky part. There's an unlimited amount of IOC-type threat feed data that you can get. What we're trying to figure out is how to make that as automated as possible. We're doing some of that, and we're looking at ways of increasing it with the help of some other types of systems. We have a whole team for automation, and that's some of the stuff that we're working through now. Typically, what we'll try to do is use the automation to integrate known threat actors that we're looking at, or who are known to attack healthcare, in our case, and not look at other ones or older ones.
Now, there’s a big debate on, what’s an old IOC? There’s a … You ask 10 analysts, and you’ll probably get 10 different answers on what constitutes an old IOC. Some folks are in the camp of keeping them forever. I was originally somewhat in that, just for historical purposes, until I finally realized over a year or so just how many we get, and then it ends up being too much. Especially in some of the sensor systems. You have to kind of figure out a balance between keeping some historical ones for reference that you can track — because I like doing analysis over time — so you can see trending and progression. That kind of helps me think about what will happen in the future.
But at the same time, you have to really determine how much your systems can handle. And what we found out is it can handle quite a bit, but at the same time, it slows everything down. Maybe it’s just our systems — I don’t think so — but you try to keep them sharp. We’ve kind of fluctuated on our thoughts on how long to keep them. It seems to be one of those moving targets for us. I kind of agree. As things change over time, and your automation changes and your systems change, I think the best policy is to keep revisiting that. I don’t know if everybody else’s systems change, but ours … As we add more things and try to add more capabilities, I think going in and looking at that is a good thing. It takes time on the front end but I think it helps alleviate some of that time on the back end.
What do you have to deal with … I suspect, because you’re in the healthcare environment, that you also have to deal with a lot of regulations?
Oh yeah, lots of regulations in healthcare. The compliance side seems to be pretty huge at Optum. Luckily, I don't have to worry about audits and stuff too much. Sometimes somebody will come ask me about how we deal with certain threats or something of that nature, but not typically, thank goodness. But there are lots of different types. There's HIPAA, of course, and there's the standard PHI.
For the most part, that has a lot to do with reporting, and when we have to report that we were breached in some way. Of course, the Department of Health and Human Services has their wall of shame, so that's the big thing. That's a big driver, right, to not be on the wall of shame. But it also makes a good reference point as to how breaches are occurring, and that's something else that we do track. It sounds kind of funny, but at the same time, it's not. Healthcare is targeted a lot, and I've thought about this quite a bit. There's no real hard facts on this, just from talking to folks.
People in healthcare have a hard time believing that they are as big of a target as they are, because people get into healthcare because they want to help people. It makes it hard for them to understand that other people aren’t like that. I think the focus is on helping people, not trying to perform security. Not that they don’t use security, or don’t like security. It’s just that, it’s hard sometimes for healthcare workers to wrap their head around just how — for lack of a better term — bad people can be.
Yeah. I’ve heard often that when it comes — on the healthcare side of things — that if there’s a security practice that gets in the way of patient care, that security practice is out of here. A surgeon in the operating room … If somebody’s going to slow him or her down, their priorities are clear. That’s a really interesting insight coming at it from your direction.
Yeah, and it's the same kind of thing. Optum's such a big company. It's very interesting to work there because yes, it is healthcare, but we also have a bank, so we do a lot of financial as well. Our bank isn't a retail bank; it's HSAs, so we hold … I heard that we are the biggest holder of HSA accounts in the nation, and we're global, so it's never …
That’s a lot of bullseyes to have on your back, right?
Exactly. I mean, if you want to do healthcare in South America, we probably are touching you in some way.
Places like Brazil, Chile, Colombia.
Yeah. Well, let’s switch gears a little bit and touch on threat intelligence. How do you dial it into the work you do there, and what is its importance to you?
I kind of feel like I'm an advocate for intelligence at Optum. What I mean by that is, a lot of people, especially the more traditional IT security folks, seem to see it as an IT issue, a network issue. So, if we block it in one way, we've blocked it. That kind of goes back to my comments earlier about how it's about a person, and how they will use malware or exploit a system. I'm always talking about the people. We have to worry about the people aspect of it.
That’s kind of what I tell them, because all the time, they’ll ask me things about the internal network and I’ll say, “There’s another team for that.” I’m not the expert on that. I’m the expert on the adversary and what we’ve seen and what we think we may see in the future. What I tell them is, I’m here to help define what the gray area means. What I mean by that is, there’s a bunch of unknowns, because we’re dealing with people. If we’re dealing with a piece of malware, if it runs as it’s designed, you know what’s going to happen.
But it's that person. How are they going to use it, how are they going to modularize it and add other things? We're looking at the behavior of people, and when you do that, you're talking about courses of action, if you want to bring it back to the DoD parlance. It's helping to define what may be. That's kind of what I see our job as. We're here to help define the gray area.
I had another analyst ask me once, "So, you guys are never 100 percent sure about anything?" I said, "That is correct." Because if we were sure, it would just be a fact, and then it's not really intelligence, because you don't need analysis to define a fact. We're there to help understand what those gray areas are, and what they may mean. That's kind of how I see it. We do have analysts that are really diving into vulnerabilities in malware and how things are being exploited.
Now, sometimes, that is not necessarily a person. It's just not all about people. I kind of equate it to most of the intel, especially on the DoD side, where it's really about your adversary. A lot of the time, it's a terrorist network or a foreign country, but that doesn't mean that you're not looking at the weapons systems. I see the malware as a weapon system. You kind of default back to that, right, because it makes sense and that's how I see it. Most of the team came out of DoD, so it's easy for us to wrap our heads around it that way, and that was not by design. It just defaulted that way.
It’s an interesting analogy because, to go back to your history, it seems to me like if you’re tracking that submarine, you need to know not only what kind of submarine it is, but what weapons it carries and maybe even the style of the captain? What tactics does the captain prefer or subscribe to? That can vary from submarine to submarine, right?
Right, exactly. I have what we would call in the Navy a "sea story" on that. When I first joined the Prowler squadron, we went to an exercise program in Europe, basically a Red Flag-style exercise with NATO. Most of NATO at the time were flying F-16s, and the F-16s … I mean, they're all pretty much the same. Slight variations in radar types or weapons system load-outs, but for the most part, they're the same. Then, we did a strike into England one day, because we would plan strikes and go do these strike missions.
I'm not going to say which country, but it was kind of foggy that day, and they were asked, "Why didn't you attack? You went in, but you didn't drop any bombs?" They said, "It was foggy." The UK commander, acting like the big general in charge, said, "You have an F-16. It has radar." They're like, "Yeah, but we don't train to that." That goes to show the difference: everybody else did train to it. This one country didn't, even though they flew pretty much the exact same airframe as everybody else.
Right. Different people using the same tools in different ways.
Correct. That carries over from the DoD aviation side to this, and it's the same thing. What threat actor is it, and how are they using the malware or the exploit? That's in my mind because of those experiences, and that's how I see it carrying forward into the cyber world.
I think that’s a really interesting insight, and it’s a really interesting analogy. What are your recommendations for folks who are trying to figure out how to best integrate this notion of the intelligence cycle into their organization? What do you recommend in terms of getting started, and keeping the whole thing under control and manageable?
The insight that I have, from inside the organization and from doing this on the outside, is: go look at that cycle and those processes. If you don't really know or understand them, you can go to Google and pull up JP 2-0, the DoD joint publication on joint intelligence. It defines all that, so it's a good starting point, just to go see the cycle and its points. I wouldn't use it as the Bible, though. They call it the cycle, but actually, I think it was sometime around 2010 (I don't remember exactly when) that they started calling it the process, not the cycle.
Just know that hey, those are good points. If you have capabilities along that cycle, you don’t have to follow it step by step. You can flex out of that a little bit, but as long as you’re covering those elements, you’re safe, kind of thing. Not 100 percent, you know what I mean — like, you have to make sure that you have it well developed. You have to look at your own organization. I will say, from a learning point, from us at Optum, I think we started too strategic. It was hard for folks to understand what we could provide, because we were talking from too high up … From a technical, operational, and strategic standpoint, we were writing more at the strategic operational level.
What do I mean by that? We're writing about adversaries, but your SOC person, your analyst, is saying, "How does that relate to me?" Okay, China hacked into a healthcare entity. So? What does that mean to me on a daily basis? What I would say is, maybe you should start tactical first. What do I mean by that? Co-locating with the SOC analysts. We made great strides when we started embedding analysts at the SOC. Working side by side, going, "You know what, that alert you're seeing looks like it's tied to this current campaign. Let me pull up some more data for you, and look for these other indications, because this is probably what they're trying to do." That has, by far, allowed us to really integrate into operations.
Now, from that, you can step up and start talking more about the different threat actors and the campaigns and how they do things. Once you break it down technically for them, and they can see how that translates, that seems to help them understand the higher level, the operational and strategic side, because you've already shown them how it transfers at the technical level. I see threat intelligence, also, as a knowledge management problem. I think a lot of folks tend to see it as an aggregation problem. What I mean by that is, if you look at different threat intel platforms, no matter what, they all kind of talk about how they can aggregate all your different threat feeds into one.
Well, that’s good, but that’s kind of, to me, only hitting the collection phase, and maybe that processing phase. But that’s not helping me manage the rest of it. I call it more of a knowledge management problem, because it goes back to the earlier comments, when we were talking about how you get the information, how you process it, and how you make sure that folks can use it when needed and understand it. That’s what I mean by that.
It's also about knowledge creation, in that intel is just a specific type of knowledge. It helps to understand it that way instead of reducing it to an aggregation problem, because initially, what we found at Optum was that folks thought intel was us managing IOC feeds. That's not really intel; that's a function. We do some of that, but that's not all there is to it, you know.
Our thanks to Vince Peeler from Optum for joining us.
If you enjoyed this podcast, we hope you’ll take the time to rate it and leave a review on iTunes. It really does help people find the show.
Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.
We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon, Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner.
Thanks for listening.