Threat Analyst Insights: My 5 Least Favorite Words in Threat Intelligence
By David Carver on January 2, 2019
Editor’s Note: The following blog post is a summary of a presentation from RFUN 2018 featuring David Carver, team lead for subscription services at Recorded Future.
I don’t know why some people dislike the word “moist.” It has a reputation for consistently upsetting people, but as far as I can tell, it’s a useful word to describe a specific feeling.
On the other hand, I very much dislike words that crop up in threat intelligence writing that are not useful and describe nothing — words like “may,” “could,” “significant,” “appear,” and “seem.” They are used so frequently (admittedly, by myself included) that they have become my least favorite words in threat intelligence.
I hope this brief overview convinces you to look out for them as a reader (or even use them less as a writer).
‘May’ and ‘Could’
Topping the list has to be my least favorite pair of modals — the twin hesitators “may” and “could.” Nothing sucks the air out of a piece of research like the final assessment that a new attack vector, threat group, or vulnerability “could” severely impact an organization. Whether or not the statement is true, it’s not something a business can rely on to inform patch management, a threat intelligence budget, or an overseas acquisition.
When I was first learning threat intelligence, I heard people casually refer to the percentage associated with “may” and “could” as 50 percent; for example, if you said something “might” happen, then it might just as well not happen. This is inaccurate — “may” and “could” can refer to any percentage chance between 0 and 100. Due to this huge and unhelpful range, I prefer to avoid “may” and carefully use “could,” as explained below.
“May” relates to chance. Using this word is one step removed from the true analytic work of establishing what the chances are of an event occurring. When I see a statement like “this type of lure may be used more as criminals become aware of it,” my immediate request is for the author to rewrite the statement with some assessment of likelihood (for example, very likely or very unlikely). Removing “may” from one’s vocabulary forces a more exact understanding of one’s sources and assumptions.
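To make the request above concrete: one way an analyst team might enforce “some assessment of likelihood” is to standardize a vocabulary that maps estimated probabilities to fixed terms. The sketch below is purely illustrative and not from the post; the specific bands and term names are assumptions, loosely modeled on common words-of-estimative-probability scales used in intelligence writing.

```python
# Illustrative only: a hypothetical likelihood vocabulary. The exact
# bands and terms are assumptions, not a published standard.
LIKELIHOOD_BANDS = [
    ("almost certain", 0.93, 1.00),
    ("very likely", 0.80, 0.93),
    ("likely", 0.55, 0.80),
    ("roughly even chance", 0.45, 0.55),
    ("unlikely", 0.20, 0.45),
    ("very unlikely", 0.07, 0.20),
    ("remote", 0.00, 0.07),
]

def likelihood_term(probability: float) -> str:
    """Map an estimated probability (0-1) to a standardized likelihood term."""
    for term, low, high in LIKELIHOOD_BANDS:
        if low <= probability <= high:
            return term  # first matching band wins at shared boundaries
    raise ValueError("probability must be between 0 and 1")

# Instead of "this lure may be used more," the analyst commits to a number
# and the vocabulary turns it into consistent language:
print(likelihood_term(0.85))  # -> very likely
```

The point of a table like this is not the particular numbers but the discipline: to pick a term, the analyst must first estimate a probability, which forces the examination of sources and assumptions that “may” lets them skip.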
“Could” relates to ability, so a good way to use this word in intelligence is to think of factors that allow or generate a result, like a switch that flips between completely on or completely off. Imagine a world in which existing versions of Adobe Flash Player have zero vulnerabilities (if only), but a newly released version has several vulnerabilities. An analyst would be correct in saying, “Users of Flash Player could now be compromised via crafted emails that exploit these vulnerabilities.”
‘Significant’
Anyone who played “The Legend of Zelda: Ocarina of Time” as a kid will remember Navi, the friendly but annoying fairy who interrupted the game every five minutes to shout, “Hey! Listen!” with presumably important information. Navi never told you why it was important until you’d already stopped what you were doing and read through several chunks of text. “Significant” is like that — it’s a word that tells the reader something is important, but leaves it up to the reader to figure out why, if it explains at all.
For instance, the sentence “We have seen a significant rise in Magecart infections in the past few months” only tells the reader that the rise was large enough to merit attention, but doesn’t give more detailed data, like whether the rise was sudden or gradual, expected or surprising, up 25 percent or up 125 percent, and so on. These are the sorts of details that matter in assessing the severity of a threat or the risk associated with not preparing for it.
Words like “significant” tend to move around in packs of similarly vague descriptors (“notable,” “interesting,” “sizeable”), contributing little helpful data but giving the impression of expertise. I try to avoid these words as much as possible because they are, most of the time, meaningless (which is sad for “significant,” defined as “having meaning”).
‘Appear’ and ‘Seem’
In many cases, a lack of exactitude in threat intelligence writing is a sign of ignorance of the topic (in which case, more research or training is necessary), but it also sometimes shows that a writer is mistaking observation for analysis. Take a sentence like, “Recent activity from APT29 appears to show a renewed interest in targeting public sector entities.” If this is where the writer stops, then they have only communicated others’ findings when they ought to have judged the trustworthiness of the source and the soundness of the argument.
“Appear” and “seem” are dangerous words for analysts to play with because they help turn off one of the best items in their toolkit: suspicion. Whether something seems like a threat is of less concern to an analyst than whether it is. A new vulnerability might appear to be of major concern until the steps required to exploit it turn out to be prohibitively complex. An APT attack against a major sporting event (for example, Olympic Destroyer) might appear to originate from one country, but in fact come from another.
An analyst’s job is to push past mere appearances to correctly categorize reality and impact. There is, of course, nothing wrong with making observations in analytical writing, but those observations should lead naturally to well-considered conclusions instead of standing in for them.
I should point out that there are proper uses of all of these words, and that in other contexts, they are meaningful and specific. Over the years, however, I’ve mainly seen them written in the context of a variety of flaws: a misunderstanding of analysis methodology, a poor grasp of cybersecurity, a reluctance to stake one’s reputation on an estimate that a customer can criticize later, and several others.
Like good writing in general, good threat intelligence relies on knowing what a thing actually means (a new type of attack, a group’s origin, or a word) and assessing it accordingly. If it appears that the words above could significantly hurt the value of threat intelligence for your organization … Actually, let’s rewrite that. The words above are usually unhelpful. Avoid them.