
Analyzing the Patch Timeline for Zero-Day Exploits

Posted: 18th February 2014
By: LEE KAGAN

It is a reality today, and has been for some time now, that the new and perhaps most critical battlefield is cyberspace. Everything is connected, either online or internally. Our sensitive information is stored digitally, hence the ever-increasing rise in cyber security spending, training, recruiting, and attacking.

Whether it be the lone hacker against an entity he or she opposes, or nation states engaging in cyberwarfare with each other, there are countless and continuous ways for breaches to occur. Sometimes it is not about who is the most sophisticated, but rather about the attacker discovering “low hanging fruit” or some mundane error or misconfiguration that allowed access to a system. However, the holy grail, if you will, is something we refer to as zero-day exploits.

To make an analogy to physical medicine, we as humans have overcome some horrible and devastating diseases and infections, usually with the assistance of medical researchers and doctors. So let’s assume a new infection hits us in the wild. It’s safe to say there’s a small to non-existent chance all those meds in your cabinet are going to help. It’s also safe to say the early recipients of the infection may be beyond saving.

The reason is, if the infection is something never seen before, how would we know to protect against it, or have the pre-emptive means to do so? In the digital realm, the zero-day attack is a lot like this. A vulnerability is discovered for which no prior defense exists, an exploit is developed to attack that vulnerability, and then… game over.

In this analysis, we are going to take a look at a topic I consider to be even more critical than the existence of the zero-day attack itself: the patch that fixes the problem.

To take it even further, and perhaps to the surprise of some of you, we will look at the very common and (by my standards) unacceptable length of time some of the largest vendors in the business take to issue a fix for severe flaws in their products. This is where the real danger exists for us as customers.

Java

Anyone in development or security, for that matter, and everything in between is aware of Java (Oracle Corp.) and its non-stop, perpetual stream of major exploits. There are two major issues with this product:

1. Java is everywhere. Many programs use it: web browsers, web applications, and so on.

2. Java is (ironically) a sandboxed environment. This means it runs in a self-contained manner.

However, the mechanisms by which this sandbox (or Security Manager) operates allow a lengthy list of avenues for attackers to exploit it. If we look at the analysis of Java exploits, well, I’ll let you read each entry for yourself, but to sum it up for you, I rarely see Oracle news come out that says something like, “Hey, we got an awesome new…” Instead it usually reads, “Hey, we’re really sorry about that last hiccup that exposed all of you to remote code execution, so here’s an update that should last you a week or so…”
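To make the sandbox idea a little more concrete, here is a minimal sketch (the class name and file path are purely illustrative, not taken from any real exploit) of how the Java SecurityManager of that era mediates a privileged operation:

```java
// Minimal sketch of the Java sandbox check (illustrative only).
public class SandboxCheckDemo {
    public static void main(String[] args) {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            try {
                // Untrusted (e.g. applet) code attempting to read an arbitrary
                // file must get past this check first.
                sm.checkRead("/etc/passwd");
                System.out.println("Read permitted by the current policy");
            } catch (SecurityException e) {
                System.out.println("Read blocked by the sandbox: " + e.getMessage());
            }
        } else {
            // Many Java exploits effectively reach this state from untrusted code:
            // with no SecurityManager installed, nothing is sandboxed at all.
            System.out.println("No SecurityManager installed: code runs unrestricted");
        }
    }
}
```

Disabling that single object (a call to System.setSecurityManager(null)) is all an exploit needs to break out of the sandbox completely, which is why so many otherwise unrelated Java vulnerabilities converge on that one goal.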

Herein lies one of the interesting aspects of our topic today: not the length of time between the zero-day and the patch, but the consistent exploitation and the apparent unwillingness to go to extreme lengths to mitigate recurring exploitation.

[Figure: Java exploits timeline]

Now, I should mention that when it comes to Java, the issue does not rest solely on the shoulders of Oracle, but also on how developers implement Java in their applications, and on the fact that Java is so widely used, which makes it a high-value target for cyber attacks. Here’s a good quote from HD Moore, CSO of Rapid7 and creator of the Metasploit Framework:

“It could take two years for Oracle to fix all the security flaws in the version of Java used to surf the web; that timeframe doesn’t count any additional Java exploits discovered in the future. The safest thing to do at this point is just assume Java is always going to be vulnerable.”

Let’s shift our view to the “middle” of this topic. Why is it so persistent?

I’m a believer (as we will see in other examples) that vendors, due to the fact that they are businesses, care about business. Odd, right?

Well, my father taught me that your customers are what matter, no matter what. Please the people, be great at what you do, and business will go on. Unfortunately, some vendors are more concerned with keeping a lid on anything that may hurt business, despite the impact it has on their clients.

Oracle, however, has the luxury, like many, of being so large that its failures to protect clients do not really hinder business. A serious flaw in our society.

Publish to Patch Dates

Another reality of this industry is the timeless game of cat and mouse. Hackers and defenders are constantly trying to one-up each other. In my opinion, the attackers have the advantage. The reason I say so is that while the defenders spend their time defending against, or working on a solution for, something the attackers just launched, the attackers are spending that same time on a new attack.

The number of products that have been exploited is so massive that it simply comes with the turf for huge vendors and projects: Microsoft, Apple, Linux, PHP, Apache, IIS, and so on and so forth.

(That list is probably 0.2% of them all).

The following analysis from Recorded Future shows some of the longest-running published exploits and their longevity in the wild. I also want to mention that, depending on who you believe, it takes an average of about 10 months (historically speaking) for a specific vulnerability to be eradicated.

PHP

According to OSVDB, PHP (Hypertext Preprocessor, a server-side scripting language) has a large and still-growing list of vulnerabilities, first published back in 2005. They reported that by July 2010, no mail asking the responsible individuals for details of the reported vulnerabilities had been responded to. OSVDB listed the DoE (Days of Exposure) as 12,873.

[Figure: PHP vulnerabilities timeline]

Firefox

Back in 2009, a flaw was discovered in Mozilla’s Firefox browser that allowed XSS (Cross-Site Scripting). The DoE was listed at 1,140 days.

[Figure: Firefox cross-site scripting timeline]

Those are just some very extreme examples, but nonetheless such extended timeframes do exist. On the shorter side of things, yet still deemed unacceptable by the IT community, take the incident of MS10-002. This particular update from Microsoft was labelled “critical”. For those of you who remember it, this was the Internet Explorer exploit that led to the breach of Google. Microsoft even acknowledged the bug’s discovery when a security firm out of Israel reported it.

Responsible Disclosure: To Be or Not to Be… Sued

One of the larger issues facing security researchers is the matter of how to disclose their findings. What I’m referring to is that there are individuals out there who dedicate their time to finding bugs, flaws, vulnerabilities, and exploits in products that affect many people and businesses across the globe.

You would think this a moral, dare I say even heroic, thing to do. The problem is, not every business or vendor that has a flaw discovered in its product is equally moral when it comes to dealing with it. There have been numerous reports of researchers actually facing legal action for disclosing a bug publicly before informing the vendor. There have been incidents where the researcher did disclose the bug privately to the vendor and still faced legal action. And there have been incidents where the researcher disclosed the flaw privately, heard nothing back within what he or she deemed an acceptable time frame, felt it necessary to warn the public of the risk they faced because the vendor did not react, and, yes, you guessed it… faced legal action.

Sometimes vendors even acknowledge the flaw and simply do nothing. Perhaps it is because they fear public faith in their product will drop, meaning less money; so, from the standpoint that business equals profit (no matter what), let the customer suffer so long as they continue to buy the product.

As a penetration tester myself (for the uninformed, that’s a nice industry way of saying hacker), I constantly see scenarios where custom apps and vulnerable software are running in production environments. It is also not rare (at all) to see a mere “fix” for the lone reported bug, as opposed to an investigation into what else could be lurking in the shadows of the vendor’s own code. The SDLC, or Software Development Life Cycle, is an (ahem) “idea” that I was under the impression people took seriously: develop your code, test your code, debug your code, fix your code, rinse and repeat.
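To illustrate what “test your code” should mean once a flaw is reported, here is a minimal, hypothetical sketch (the class, method, and payloads are my own, not from any real product): a naive output-escaping routine standing in for “the fix”, accompanied by regression checks that probe the surrounding class of flaw rather than only the single payload that was reported.

```java
// Hypothetical sketch: a fix for one injection bug should ship with checks
// that cover the whole class of inputs, not just the reported payload.
// Run with: java -ea SanitizerSketch
public class SanitizerSketch {

    // Naive HTML-escaping routine standing in for "the fix".
    static String sanitize(String input) {
        return input.replace("&", "&amp;")
                    .replace("<", "&lt;")
                    .replace(">", "&gt;")
                    .replace("\"", "&quot;");
    }

    public static void main(String[] args) {
        // The originally reported payload...
        assert !sanitize("<script>alert(1)</script>").contains("<script>");
        // ...plus a neighbouring case an attacker would try next.
        assert !sanitize("\"><img src=x onerror=alert(1)>").contains("\">");
        // ...and a check that benign input passes through untouched.
        assert sanitize("plain text").equals("plain text");
        System.out.println("All sanitizer checks passed");
    }
}
```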

Sadly, we see an astronomical amount of the “just get it boxed and out on the shelves” attitude, similar to the “shoot first, ask questions later” mentality. A more passionate approach to software development could single-handedly decrease the number of zero-day threats.

Well, I wish I had a more positive closing statement to leave you with, but as long as this train of thought exists, the individuals who are going to get “shot” are we, the end users, and those who are asked why we got shot will simply say, “I can’t answer that… but here’s a fix for the meantime.”
