
#ColumbianChemicals Hoax: Trolling the Gulf Coast for Deceptive Patterns

Posted: 12th June 2015
By: MATT KODAMA

Analysis Summary

  • Recorded Future analyzed a politically-motivated online hoax, #ColumbianChemicals. Our goal was to find communication patterns which reliably indicate hoaxes. Analysts can triage future incidents by assessing these communication patterns.
  • We evaluated three patterns: 1. A vocal minority that disproportionately drives the discussion. 2. A target audience focus which is appropriate to the type of incident. 3. An audience reaction of increased social media interest in some authors.
  • We found that Web intelligence data generated by this hoax exhibits atypical patterns, and propose that these atypical patterns are durable. This insight provides analysts a method to rapidly characterize whether a novel incident is a hoax.

Reporting by Adrian Chen in The New York Times Magazine has shone a fascinating light on “information operations” conducted on the open Web. In Usenet days we called it “trolling.” Flash forward thirty years, and politically motivated trolling is a full-time job.

The impetus for Adrian Chen’s investigation is a specific hoax, perpetrated on September 11, 2014. The hoax reported a man-made disaster in St. Mary Parish, Louisiana, that never actually occurred, complete with fabricated claims of ISIS involvement. Chen’s investigation linked this hoax to a group called the Internet Research Agency. Max Seddon has also reported on the politically motivated activities of this group, related to events in Ukraine and Ferguson.

Here’s how reporting of the hoax appears in Recorded Future:


This timeline covers September 11 and 12, 2014 in the GMT time zone, corresponding to daylight hours in US Eastern time. During this period, Recorded Future’s real-time threat intelligence analysis captured nearly five thousand reported events related to #ColumbianChemicals, from nearly a thousand distinct authors.

At a cursory glance, this looks like a bona fide disaster report. Later investigations of #ColumbianChemicals and the Internet Research Agency have confirmed that this was a hoax, executed with hard work through online channels – what intel professionals refer to as an information operation.

From Recorded Future’s perspective – structuring Web intelligence information for threat intelligence – these intriguing investigations raise some questions: Can we apply automation to make this work more efficient? Does forensic analysis of the Web intel data from this incident reveal patterns that can rapidly characterize future incidents as “more likely to be legit” or “more likely to be a hoax,” as a springboard for conclusive review by an analyst? We identify and assess three candidate patterns below.
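To make those questions concrete, the sketches in this post use a tiny Python model of the data. The record layout and field names below are illustrative assumptions, not the Recorded Future schema:

```python
from collections import Counter

# Illustrative event records; the field names are assumptions, not the
# Recorded Future schema. Each record models one captured post.
events = [
    {"author": "persona_01", "mentions": ["@ronpaul"],
     "followers_before": 80, "followers_after": 81},
    # ... ~5,000 records in the real dataset
]

def author_distribution(events):
    """Posts per author: pattern 1 below looks for outliers here."""
    return Counter(e["author"] for e in events)

def target_distribution(events):
    """Posts per mentioned profile: pattern 2 looks for hotspots here."""
    return Counter(m for e in events for m in e["mentions"])

def follower_growth(events):
    """New followers per author: pattern 3 looks for audience response."""
    return {e["author"]: e["followers_after"] - e["followers_before"]
            for e in events}
```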

1. Those few loud voices… where are they?

If this incident were legit, we’d expect to see a few voices that are significantly louder than all the rest. These could be a few people who were extremely upset about the incident, or a few people who were somewhat upset but are especially active online. That’s what normal looks like.

But in this case, we don’t see this pattern. Instead we see suspiciously smooth patterns in the data. It doesn’t look lumpy like real data – it looks overly produced.

Here is a contrasting example of a normal reporting pattern. When the #Sandworm vulnerability was first disclosed online in October 2014, Recorded Future collected nearly 1,400 reported events from nearly 1,200 different authors. Almost all of those authors tweeted once then moved on with their day.

The tiny bumps in the curve above to the right of “four reports per author” are the top eight authors. They collectively drove about 5% of the entire conversation. When we understand that @securityaffairs, @timwoodsdesign, @symantec, @PhysicalDrive0, @argevise, @jamestaliento, @patchguard, and @bartblaze are the “vocal minority” on this topic, we can immediately have higher confidence that this is no hoax.
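A hedged sketch of how this check can be automated: count posts per author and measure what share of the conversation the top handful drives. The choice of eight authors mirrors the #Sandworm example above; any real cutoff would need tuning, and the example distribution is hypothetical.

```python
from collections import Counter

def vocal_minority_share(posts_by_author: Counter, top_k: int = 8) -> float:
    """Fraction of all posts driven by the top_k most active authors."""
    total = sum(posts_by_author.values())
    top = sum(count for _, count in posts_by_author.most_common(top_k))
    return top / total if total else 0.0

# Hypothetical 'lumpy' organic distribution: mostly one-off authors
# plus a handful of loud voices, as in the #Sandworm example.
organic = Counter({f"user{i}": 1 for i in range(1150)})
organic.update({"@securityaffairs": 10, "@symantec": 9, "@bartblaze": 8})
print(vocal_minority_share(organic))  # clearly above the flat 8/1153 baseline
```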

Should this pattern be durably useful? Yes. It’s vexingly hard to rapidly engineer realistic data. Ask any engineer who has QA’ed with “test lab data” and then been appalled by results in the wild. “Social” engineering is no different.

And let’s suppose that a hoax perpetrator creates the perfect illusion of realism. This perversely helps incident investigators by shortlisting a few critical false personas that will more rapidly reveal the hoax. Investing in those tactics gives the hoax perpetrator diminishing or even negative returns.

2. Did anybody call the police?

The vast majority of the posts on this hashtag are clearly directed at specific Twitter profiles. Here is a representative example:

There are three odd characteristics of this communication pattern.

First, this online discussion doesn’t converge on any individual or group as the target audience. Natural candidates include people who must deal with the incident or the people who should be held responsible. But in this discussion there are no significant “hotspots” among the audience targets.

| Posts Directed at Profile | Profiles With This Attention Level |
| --- | --- |
| 50 or more | 0 |
| 41-50 | 7 |
| 31-40 | 31 |
| 21-30 | 43 |
| 11-20 | 84 |
| 1-10 | 102 |

The posts are broadly directed at 267 different profiles. It’s as if the people behind these personas fanned out to lightly touch as many people as possible instead of cooperating to concentrate attention on a few key audiences.
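This fan-out is easy to test for mechanically. A sketch that extracts @-mentions from raw tweet text (an assumption about the input format) and asks whether any target stands out from the typical one; the ratio threshold is a placeholder:

```python
import re
from collections import Counter

MENTION = re.compile(r"@\w+")

def attention_converges(tweets, hotspot_ratio=5.0):
    """Return True if a few targets dominate the @-mention distribution.

    In the hoax data the top target drew only 46 posts spread across
    267 profiles, so this check fails: one more hoax indicator.
    """
    counts = Counter(m.lower() for t in tweets for m in MENTION.findall(t))
    if not counts:
        return False
    top = counts.most_common(1)[0][1]
    median = sorted(counts.values())[len(counts) // 2]
    return top >= hotspot_ratio * median
```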

The second oddity is the poor fit between the audience targets and the reported event. The top audience targets are Brenda Buttner (Fox News business correspondent) and Ron Paul (former congressman and US presidential candidate), and the rest of the top tier is similarly only tenuously relevant to a terrorist attack in Louisiana.

| "Target" Profile Named in Tweets | Number of Tweets |
| --- | --- |
| @brendabuttner | 46 |
| @ronpaul | 46 |
| @abcpolitics | 45 |
| @lolpoliticsusa | 45 |
| @heritage_action | 42 |
| @jimkleinpeter | 42 |
| @johnkerry | 42 |
| @laurenashburn | 40 |
| @adrianeq | 39 |
| @jrball35 | 39 |
| @repkevinyoder | 38 |
| @politiclmadness | 37 |
| @thedemocrats | 37 |
| @pattiannbrown | 36 |
| @senbobcorker | 36 |
| @repjohnlewis | 35 |
| @jerrymoran | 34 |
| @politicstbtimes | 34 |
| @senjeffmerkley | 34 |
| @senscottbrown | 34 |

The audience targets are national political persons and organizations, both partisan and nonpartisan. This is surprising and inappropriate because the purported event is a terrorist attack (not a purely political event) in a location that is normally covered by regional media rather than national media.

The third oddity is an absence: some expected communication is missing. There is no tweet aimed at local St. Mary Parish authorities like @StMarySO. Louisiana governor @BobbyJindal also doesn’t get a tweet, despite the heavy bipartisan political focus. Or maybe this is not bizarre at all – alerting the directly responsible authorities is a great way to smoke out a hoax!
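That absence can be checked directly: maintain a watchlist of profiles a genuine incident in this location should tag, and flag their complete absence. The watchlist below contains only the two handles named above; a real one would be larger and is an assumption on our part:

```python
# The two handles are from the discussion above; a real watchlist for a
# St. Mary Parish incident would be larger (this one is illustrative).
EXPECTED_RESPONDERS = {"@stmaryso", "@bobbyjindal"}

def missing_responders(mention_counts):
    """Expected responders that received zero posts: a hoax signal."""
    mentioned = {m.lower() for m in mention_counts}
    return {r for r in EXPECTED_RESPONDERS if r not in mentioned}
```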

This second characteristic pattern should also be durable. A legit organic discussion should converge on some audience targets and exhibit outliers. These audience targets will be appropriate to the incident, at least from the perspective of the vocal minority. But for a hoax perpetrator, focusing attention on specific audience targets is counterproductive. This attention raises the stakes for that target, and thus increases the odds that the target will look carefully enough to see through the hoax. The perpetrator will likely avoid these actions and accept the cost of exposing a clearly atypical communication pattern.

3. Audience response… where are the followers?

This third and last pattern involves temporal changes in the data, comparing a baseline observed before the incident starts to data at observation time. It therefore depends on access to a system that can provide that historical data.

Normally we expect that when a person posts something really hot on social media, it attracts the interest of new followers. (You might say that’s the whole point of social media.) But we don’t see that pattern here.
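Given two follower-count snapshots per profile (one pre-incident baseline, one at observation time), the growth histogram falls out of a simple subtraction. The snapshot format below is an assumption about the collection system, and the example values are hypothetical:

```python
from collections import Counter

def follower_growth_histogram(baseline, observed):
    """Bucket profiles by followers gained between the two snapshots."""
    deltas = Counter()
    for profile, before in baseline.items():
        deltas[observed.get(profile, before) - before] += 1
    return deltas

# Hypothetical snapshots keyed by profile:
baseline = {"persona_01": 80, "persona_02": 120}
observed = {"persona_01": 80, "persona_02": 265}
print(follower_growth_histogram(baseline, observed))  # Counter({0: 1, 145: 1})
```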

| New Followers Acquired | Profiles With This Audience Growth |
| --- | --- |
| 145 | 1 |
| 10 | 1 |
| 7 | 3 |
| 6 | 2 |
| 5 | 6 |
| 4 | 16 |
| 3 | 28 |
| 2 | 69 |
| 1 | 147 |
| 0 | 603 |

This looks more legit at first glance – that one person picked up 145 new followers! But on closer inspection of that profile, the hoax is immediately laid bare. The one profile with massive growth is tweeting the same content about the incident, did not have a significantly larger existing audience before the incident started, and is not posting to larger or more active target channels. This profile looks much the same as the rest, so its increased audience size is just a further anomaly, not evidence of authenticity.

For the first two patterns, we proposed that abnormal communication patterns will be durable indicators of likely hoaxes, because manufacturing the expected pattern is counterproductive for the hoax perpetrator in the bigger operational picture. Closer inspection of this third pattern directly illustrates the point. Absent any explanation for why this profile should get a dramatically stronger audience response, we’re left only with an alternate hypothesis: This was an already-existing socially engineered persona, which was added to the hoax team’s toolset for this information operation, and so the team promptly “made friends” with their other avatars. Under this hypothesis, a capture of these 145 new followers makes a great jumpstart for a deeper investigation of the perpetrator’s methods.

Conclusion

Through this deep dive into the #ColumbianChemicals hoax, we have identified three expected patterns in online event reporting that distinguish legitimate incidents from hoaxes.

  • The reporting author distribution should exhibit outliers. These authors are the vocal minority driving the online discussion and should be given priority for source credibility assessment.
  • The target audience distribution should also exhibit outliers. These are the action / advocacy targets of the online discussion, and should be given priority in assessing the most likely intentions of authors active in the discussion.
  • The non-target audience should respond to the online discussion, through actions like following / friending specific profiles, and retweeting / liking / favoriting specific messages. Strong non-target audience response highlights opportunities for independent confirmation of the event in offline sources. Lack of non-target audience response suggests that the event cannot be confirmed and is more likely a hoax.

All three pattern assessments can be assisted by automation. The analysis in this blog post was conducted using the Recorded Future dataset of online event reporting. The baseline collection and processing (annotation) of this public event reporting provided the bulk of the time savings. The three patterns can be evaluated in the Recorded Future Web application. For this analysis, we used the Recorded Future API to obtain more quantitative detail.
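As a closing illustration, here is one hypothetical way the three checks might compose into a single triage label. This is not the Recorded Future implementation; the thresholds are placeholders an analyst would calibrate, not values derived from this analysis:

```python
def hoax_triage(author_top_share, target_top_share, zero_growth_fraction):
    """Coarse triage label from the three pattern checks.

    Thresholds are illustrative placeholders, not calibrated values.
    """
    flags = 0
    flags += author_top_share < 0.03      # no vocal minority (pattern 1)
    flags += target_top_share < 0.02      # no audience hotspots (pattern 2)
    flags += zero_growth_fraction > 0.6   # no audience response (pattern 3)
    return "more likely a hoax" if flags >= 2 else "more likely legit"
```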

Are analytic assessments like these part of your threat intelligence work? Please contact us to learn how Recorded Future can aid you in making faster, more accurate assessments.
