How to Keep Finished Intelligence Fresh
Our guest today is Storm Swendsboe. He’s an analyst services manager at Recorded Future, leading a team of intelligence analysts providing on-demand reports for their customers. In our conversation he explains the different types of reports his team provides, with a focus on finished intelligence. Swendsboe answers questions like where finished intelligence fits into an organization’s threat intelligence strategy, how it can be customized for specific audiences, and how to make sure a report doesn’t become out of date the moment it’s published.
For those of you who’d prefer to read, here’s the transcript:
This is Recorded Future, inside threat intelligence for cybersecurity.
Hello everyone, I’m Dave Bittner from the CyberWire. Thanks for joining us for episode 64 of the Recorded Future podcast.
Our guest today is Storm Swendsboe. He’s an analyst services manager at Recorded Future, leading a team of intelligence analysts providing on-demand reports for their customers. In our conversation, he explains the different types of reports his team provides. We focus on finished intelligence, where it fits in an organization’s threat intelligence strategy, how it can be customized for specific audiences, and how to make sure a report doesn’t quickly become out of date the moment it’s published. Stay with us.
We’ll walk through three different types of intelligence. Now, there’s a variety of ways within the industry that people delineate that, so I’ll walk us through two different ways of looking at it. Probably the first and most common one is finished intelligence as a finalized product that can get pushed out one way or another — something that has gone through peer review, has some sourcing associated with it, and is then published. Then there’s a more in-depth way of looking at it, which breaks down the different forms a finished product might take. The way that’s done is, we’ve got three different types of intelligence.
Very broadly speaking, there is flash reporting, there is current intelligence, and then there’s finished intelligence. Now, the first one we’ve got there is flash reporting. As the name suggests, it’s supposed to be a very quick report. On my team, we do these in about 24 to 48 hours. These reports are designed not necessarily to be as thorough, but to take a number of sources, conduct analysis on them, and then push them out. In a flash report, we don’t necessarily have the time — because of the urgency of the issue — to actually go through and double or triple-source things, or to assign confidence levels to everything.
It’s something that, even at low confidence, we want to get to a partner or a customer immediately. We’ll put together a very quick flash report and send that off to somebody. Now, the next step up is a current intel piece. This has got a slightly higher confidence level. Normally, these take a couple of weeks or so to produce. In certain government agencies, these might even take a couple of months. One of these reports is going to be something a lot more in depth, like an actor profile or a tool profile, where the analyst can actually go in and say, “Alright. I have gone through the primary sources for this. I’ve verified them. I’ve got confidence levels assigned to them.” We can go through and look at that. There are second-tier and third-tier sources. It has been reviewed by a peer. It has been reviewed by a senior-level manager, and possibly by an editor as well, and then it’s been produced. The confidence associated with a report like this is generally going to be a lot higher. Then there’s the last one, finished intelligence, which is where a lot of the stuff we publish on our blog ends up.
This is where an analyst goes out, makes assessments, finds sources to support or to challenge those assessments, runs through all of that data, runs through those sources, compares and contrasts the confidence that’s behind it, and then presents that to the company. The analyst submits the draft document — literally about half of the company has access to it in its draft form — and tons of people will go in there, provide comments and feedback, and challenge things that are within it.
Because of that, the assessments that come out are generally at a higher confidence level, or at least the confidence assessment language itself will be a little bit more definitive, in the way that we talk about those items. Those are the different types of ways you can look at finished intelligence.
Now, when we’re talking about finished intelligence, are there various types of reports that are generated? What’s the variety within that category?
Within that category, you can go into a variety of different things. As I mentioned, there are things like actor profiles or tool profiles, where you’re trying to make assessments in regards to, who is this actor? What’s their background? What does their history look like? What attack methods do they use? What’s their history on their forums? Do they have a good reputation where they’re operating? Taking that one step further into a threat assessment: how much of a risk does this pose to our company or to our customers? Does this pose some sort of change within trends? Are they developing something new that might affect other things?
Yeah, that’s an actor profile. For a tool profile, we would go through and take apart a piece of malware, or at least detonate it in a sandbox, possibly do some reverse engineering on it, and pull out the IOCs that are associated with it, see what actors are involved in using it, what its general use case looks like out in the wild, and then pull together the threat assessment of how much it’s going to affect our customers, as well as our company itself. That’s one tier. Then, you can get into some more trend analysis. Where do you think trends are going within the next quarter?
Now, this can be associated with trends in regards to threats to an industry, or trends in regards to an entire motivational vertical, such as trends within cybercrime or within cyberespionage. Additionally, you could also look at trends within a country, so, whether attack methods being used within a certain country are trending upward or downward, and then, what that means for your company and your business. Then, further on down the road, you can get into full-on, focused intelligence reports where you look at an entire vertical. You look at the threats that are directed toward your company, toward your industry in general. You look at the trends that are associated with that.
You pull out who the primary actors are and then you sort of make an entire threat landscape for things that you, as a company, have to be concerned about. Not just directly against your own company, but with peers that are within your group.
Comparing and contrasting the finished intelligence with the flash briefing, or the current intelligence … There’s really a temporal aspect here of, I guess, both having the time to be reflective on what’s going on, but also to be more forward looking.
Exactly, exactly. There’s generally a more strategic aspect with some pieces of finished intelligence, although you can still do that within a flash report or within a current intel piece. The only difference there is going to be the confidence assessment that you have alongside it and the amount of work that you can put behind, say, the trend analysis that’s associated with it. Even then, still, one of those pieces can still have good assessments. They can be solid. There’s no ding against the type of analysis that’s being done there. It’s more the amount of sources and the amount of certainty that you can have behind the analysis.
Can you give us some insight into the type of work that your team does? I know you can’t go into a lot of specifics with the clients that you work with, but can you give us a sense of what your day to day is like?
I’m the manager for the analyst on-demand team over here at Recorded Future. Our purpose is, we are here as an extension of our customers’ teams. Customers can come to us and say, “Hey, we just had this event happen on our network. We are working on mitigating it, but we need a report that we can share around with upper management and with other departments on this piece of malware that we found on our network. Can you go write that report up for us?” That might even just be a quick little flash report for them. We’ll write things that are very specific to a customer’s needs and then deliver that to them on a timely basis.
While we can’t talk too much about the reports in specific, we can talk in more general terms. On our team, we do a variety of different types of reports. We will do everything from a flash report, all the way up to a finished piece of intel. The delineations we’ve got on our team are: flash reports, which are 48 hours or less; current intel pieces, which are more of a deep dive — these will generally take us about 10 days or so to produce; and then focused intelligence reports, which are our big finished-intel-type pieces where we’ll spend about a month working on a topic or a project. Normally, those are broad in scope, like the threat landscape-style reports we put together.
When it comes to finished intelligence, I’m thinking about how many of these malware campaigns are ongoing, and they evolve over time. It strikes me that a piece of finished intelligence is a snapshot of a specific period of time. How do you deal with the fact that after you publish your report, things may change? Do you have addendums? Do reports get updated over time?
Actually, that is a very good question, and one that we’ve had issues with throughout my career. Because as you mentioned, things can go out of date very quickly. You might write a malware report on a tool that you found somewhere, and it could technically be out of date as soon as you hit enter and send it to the customer. One of the ways we work around that problem here at Recorded Future — and it’s fairly unique to us, partly because of the toolset that we have — is we link everything that we write about to sources within Recorded Future. For example, we’ve created a search that’s designed to track trends across, say, a certain TTP targeting an industry.
We will share that within our reports with our customers. That way, a month later, two months later, they can read our baseline report that we’ve written for them and go click on that link, and it will pop up as, “Okay. Here’s the most current, up-to-date information,” and they can actually go see that. Our reports end up — because of the Recorded Future augmentation — being more living documents than they otherwise would be.
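The “living document” idea above can be illustrated with a small sketch: instead of pasting static results into a report, you embed a link that re-runs the underlying search whenever the reader clicks it. Everything here is hypothetical — the endpoint, the parameter names, and the `live_search_link` helper are placeholders for illustration, not Recorded Future’s actual product API:

```python
from urllib.parse import urlencode

def live_search_link(base_url: str, terms: list[str], industry: str) -> str:
    # Encode the search criteria into the URL itself, so the report's
    # reader always lands on current results rather than the snapshot
    # that existed when the report was published.
    params = {"q": " AND ".join(terms), "industry": industry}
    return f"{base_url}?{urlencode(params)}"

# Hypothetical saved search tracking a TTP against one industry.
link = live_search_link(
    "https://example.com/search",  # placeholder endpoint
    ["credential phishing", "spearphishing"],
    "finance",
)
print(link)
# -> https://example.com/search?q=credential+phishing+AND+spearphishing&industry=finance
```

The report text stays fixed, but the query behind the link keeps returning fresh data — which is what makes a baseline report still usable a month or two later.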
How does the work differ when you’re creating things for a private sector company versus someone in the public sphere — a government organization, or something like that?
Generally, it doesn’t change too much at all. The reason for that is, a government customer and a private sector customer still have mostly the same needs. Now, if you’re talking about doing analysis on, say, an APT group or working on something that might be classified, then okay, there’s another bag of things that goes alongside that. But the needs and desires of most customers are generally the same. That comes down to the needs and desires for a piece of intelligence in general: at the end of the day, a piece of intelligence is supposed to be something that you can act on in one way or another.
This is something that’s kind of unique to an on-demand analysis service team: there’s a specific way that we can go about making sure we’ve got the right requirements, to make sure these reports are actionable. In some spheres, you have to sort of guess what the intel requirements are, but because our customers come to us and say, “Hey, we want you to write a report on something,” that gives us really good direction, like, “Okay. This report is going to be useful for our customer and they have a desire to act on it.”
We then reach out to a customer and scope out that report with them, so we try and make sure that what we’re writing isn’t just a report like what you would get in the news or on a normal website. We want to make sure that a piece of intelligence we’re writing for someone is actually going to be used at the end of the day. The way we go about doing that is, we’ll ask questions about who the audience is, what the goal is at the end of this report, and then, whether they have any additional information they want to share with us that they might not otherwise be able to share through email communications.
We’ll work backwards along that list. First off is that whole context idea. A customer says, “Hey, I want a report on Group X.” Okay, cool. We could simply go out there and write a report on Group X, but what we really want to know at the end of the day is, “Why?” Was there a reason that this actor came up on their radar? Is there a reason that they need to have a report on this? When we get on a phone call with a customer, that normally is one of the first things that comes out. They’ll tell us that they saw this actor doing something over here. They’re concerned about that. Now, we’ve got a little bit more context.
We know that that’s an incident specific to that actor that we need to address within the report. It helps guide us toward using our time more effectively, but also, knowing exactly what matters to the customer. Then, going further along that line, we try to figure out what the goal is, whether it’s a strategic goal at the end of the day where they’re trying to allocate resources based on threats, or whether it’s very tactical and they’re just trying to simply mitigate risk from this one event or this one actor. Knowing that helps us also design reports — sort of, our end conclusions — to cater to answering one of those two questions.
Then, the last one is, “Who is this report getting written for?” Which might sound like a trivial question, but it’s extremely important within the world of intelligence report writing. The reason for that is, a report that’s written for somebody in the C-suite is going to be written very differently from a report that’s for a guy in a SOC. Up in the C-suite, you’re going to be primarily concerned with what the bottom line implications are, making sure that everything is short and to the point, and that no time is wasted, because somebody in the C-suite is generally not going to read a 40-page report on something.
Making sure that everything you need to say is right up front for that customer, then giving action items very quickly within a report like that. One of the ways that we do that in all of our reports is, we’ll have executive summaries at the top. We’ll have whatever our key assessments or our recommendations are — we’ll pop those in bullet points right underneath the executive summary. That way, if an executive gets the report and they only have the time to read that first page, they’ll know everything that they need to know from that entire report.
Then, on the flipside of that, you’ve got the SOC audience. Guys who are in a SOC are primarily concerned about, “Okay, what are the indicators that I need to block, or what is the behavior that I need to be looking for on my machines that might indicate we’ve been compromised by this piece of malware, or by something that this actor is doing?” Those reports are going to be a lot more technical, but they’re also going to have appendices that are very easy to use in the SOC world for, “Okay, here’s a list of IPs that you should be either monitoring for or blocking right now, and here are a bunch of malware actions that are associated with this malware campaign or this actor.”
Giving them those resources … Once again, if they don’t have much time, they can simply flip to the appendix. Say, “Okay. Here are all of the lists that we need. I’m going to just copy and paste this and throw this into one of my tools to monitor for this stuff on our network.” Then, smack in the middle, we’ve got the folks who are our counterparts over on the customer side, which are the other intel teams. Those guys are very concerned about the stuff that’s in the middle. That would be, how we got to those assessments that we’re presenting to the C-level executives and how those indicators are actually associated with the malware and with the actors that we’re talking about.
Out of, let’s say, a 10-to-15-page report, the first two pages are targeted toward the C-suite, and the last three or four pages might be targeted toward the SOC. All of the stuff in the middle is to convince or to verify to the intel analysts on the customer side that the assessments we’re making are actually backed up by multiple sources, by different pieces of evidence that we found, and that our assessments are actually sound.
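The copy-and-paste appendix workflow described above — a SOC analyst lifting indicators out of a report and feeding them into a monitoring or blocking tool — can be sketched in a few lines. This is an illustrative toy that assumes a free-text appendix; a real report would more likely ship indicators in a structured format such as CSV or STIX:

```python
import ipaddress

def extract_ip_indicators(appendix_text: str) -> list[str]:
    # Pull valid IP addresses out of appendix text so they can be fed
    # straight into a blocklist or monitoring rule; tokens that are not
    # IPs (prose, labels, punctuation) are silently skipped.
    indicators = []
    for token in appendix_text.split():
        candidate = token.strip(",;()[]")
        try:
            indicators.append(str(ipaddress.ip_address(candidate)))
        except ValueError:
            continue  # not an IP address
    return indicators

# Example appendix line (RFC 5737 documentation addresses).
appendix = "C2 infrastructure: 203.0.113.7, 198.51.100.24; staging host 192.0.2.99"
print(extract_ip_indicators(appendix))
# -> ['203.0.113.7', '198.51.100.24', '192.0.2.99']
```

In practice the extracted list would go into whatever the SOC’s tooling accepts — a firewall blocklist, a SIEM watchlist — which is exactly the “flip to the appendix and paste it into my tools” use case.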
Now, how does it work internally, as you gather this information and as these reports are built? I mean, you must be building a huge library of all of these things that you can draw on, build on, and reference so that there’s not a lot of duplicated effort going on within the organization.
Some of the stuff that we do, if it’s not customer sensitive, we do try to pump back into the product. Some of the shorter summary reports that we do, like for example, our weekly reports, after we’ve published those to our customers, it takes us a little while to pump that stuff into the product. We’ll generally try to put that stuff back in there as historical knowledge. Eventually, for some of the non-customer-specific reports that we write, we’ll try to pump that information back in there as well. On the flipside, on the internal side, we keep certain drives available with our reports.
They’re easily searchable. If a new report comes up, we generally talk within the various intel groups that we’ve got over here and we make sure that we’re working and coordinating with other folks, so we’re not doubling up on effort.
Now, from a leadership point of view, with a team that you work with, you have a lot of technology there. The tools that you use can gather lots of information. You’re using artificial intelligence and machine learning. It strikes me that at the core of all this, you have analysts who sometimes just get a feeling that maybe something isn’t right. How do you provide the freedom for an analyst to chase after something? When you might not know what the answer is going to be, but somebody comes to you and says, “I’ve got a funny feeling about this. Is it alright if I take some time and look into this?”
Yeah. While we do primarily use the Recorded Future tool and a couple of other tools we’ve got, I highly encourage our analysts — even for our flash reports — to go outside of the product and make sure that we’re not missing stuff. In the cases where the product is missing something, we circle back around afterwards with our collections department and we make sure that we’re collecting on that. Anytime there is something we weren’t collecting on that’s actually relevant to our customers, if we find that outside, we make sure we come back and pipe it back in.
For folks who are just getting started with this — maybe they’re shopping around, trying to decide how threat intelligence is going to fit into their operations — what sort of advice do you have?
First thing is, the folks that are in your SOC should not be tasked with doing threat intelligence on the side. There are some folks I have met who have the skills and capabilities to be both a very good SOC analyst and a good intel analyst. You really don’t want to put the stress of both of those onto one analyst, though. One, it’s going to drive them a little bit nuts, and two, if they’re doing threat analysis, that means they’re not doing what their primary job is, which is monitoring things on your network and making sure that the front lines are defended.
If you want to get into having threat analysis and providing intelligence within your company, make sure that you’ve got a couple of dedicated resources for that, or that you’re piping that information in from somewhere else. Then, what are some of the first couple of products you might want to produce? The first couple of ones that will help show the value of intelligence within your company are going to be things like weekly summary reports or monthly reports. Things that you can share around within your company, either to other department heads or upper management.
Showing them, “Here are some of the major events that we think are applicable to our company, and here’s our analysis of them for this week.” Here at Recorded Future, we answer that one with the weekly threat landscape product we produce, and we see that a lot of our customers are doing things very similar to that. It is a time-intensive product, but as far as I’ve seen, it’s one of the ones that pays the most dividends of any intelligence product. Other things you should probably look at using your newly formed intel team for are quarterly or monthly assessments, as well.
Looking not just at your company and what its threat landscape looks like, but additionally, what the threat landscape looks like for your entire industry. The idea of intelligence, in a sense, is to take a predictive, forward-looking mentality. You want to be looking at your peers and seeing what’s affecting them, because that’s the type of stuff that might come and start targeting you next. That preparatory angle is what you’re looking for. Really, at the end of the day, the goal of intelligence is to be actionable. The actions that you take should help you reduce risk within your company.
As I’m sure most CISOs have realized, it’s impossible to stop every single event from ever happening. Really, what your job is at the end of the day, is to reduce the risk of those events happening. When you’re tasking your team with writing intelligence pieces, try to guide them toward writing pieces that help you reduce risk in one way or another, or to have action items that will help you reduce risk. That’s probably one of the best ways to get that type of stuff started within your company or within your group.
Our thanks to Recorded Future’s Storm Swendsboe for joining us.
If you enjoyed this podcast, we hope you’ll take the time to rate it and leave a review on iTunes. It really does help people find the show.
Don’t forget to sign up for the Recorded Future Cyber Daily email, where every day you’ll receive the top results for trending technical indicators that are crossing the web, cyber news, targeted industries, threat actors, exploited vulnerabilities, malware, suspicious IP addresses, and much more. You can find that at recordedfuture.com/intel.
We hope you’ve enjoyed the show and that you’ll subscribe and help spread the word among your colleagues and online. The Recorded Future podcast team includes Coordinating Producer Amanda McKeon and Executive Producer Greg Barrette. The show is produced by Pratt Street Media, with Editor John Petrik, Executive Producer Peter Kilpe, and I’m Dave Bittner.
Thanks for listening.