You Can’t Do Everything: The Importance of Prioritization in Security
On the face of things, the goal of security seems simple: to protect an organization from any and all cyberattacks.
Patch every vulnerability. Respond instantly to every security incident. Invest in every available technology. Have enough skilled staff on every team.
This isn’t possible, of course. There isn’t a single organization on the planet that has the resources to completely defend against every possible type of cyberattack.
In fact, not only is it not possible to be 100 percent secure, it’s not even a helpful target to aim for.
Too Complicated, Too Expensive, Too Time Consuming
Let’s be honest — before the widespread adoption of the internet, security was a fairly simple affair. So long as you checked ID badges at reception, locked up at the end of the day, and shredded documents before throwing them in the trash, everything pretty much worked out.
Even if someone did manage to break in and steal something, you’d have insurance to cover the damages.
But we’re well past that era of simplicity now. It doesn’t matter which country or industry you’re in, or how large (or small) your organization is, there are thousands of threat actors out there who see you as just another payday. Even worse, their attack options are many, varied, and growing every day.
The idea you could ever be 100 percent protected against every possible attack vector is, quite simply, delusional. And, as is usually the case, if you try to do everything, you’re likely to achieve almost nothing.
Think about it.
If you run a vulnerability scan and it returns thousands of results, can you realistically patch all of them? Doubtful. Chances are patches aren’t even available for all of your vulnerabilities, and even if they were, the impact on business continuity would be untenable.
And what about security technologies? Can you attend a security conference and decide to invest in everything that looks (or even is) useful? Clearly not, and even if you could, you’d likely never have the time or resources to properly integrate everything with your existing security infrastructure.
At this point, you’re probably thinking, “_Sure, tell me something I don’t know. When do we ever get to do everything we want?_”
Good point. And it leads us nicely to our next dilemma ...
Getting Sidetracked by Things That Seem Important (but Aren’t)
Ever heard the phrase, “_What gets measured gets managed_”? It’s usually attributed to Peter Drucker, and refers to the tendency in business to set metrics for success, and then focus more on achieving those metrics than on why the metrics were set in the first place.
Well, here’s our updated version for the 21st century: What gets attention gets money thrown at it.
Can you see where this is headed? In situations where it’s impossible to do everything, prioritization becomes essential. And in a world of hype-fueled media and sensationalist headlines, our attention is routinely hijacked by things that seem important but actually aren’t.
For example, malware variants such as WannaCry, Petya, and NotPetya garnered a huge amount of media coverage in 2017. In turn, when asked what they considered to be the greatest threat to their organization’s success, 62 percent of CEOs named malware.
Unfortunately, the fact that 62 percent of CEOs think malware poses the greatest threat to their organization has nothing at all to do with what actually poses the greatest threat. In reality, social engineering attacks (e.g., phishing) are a far more common cause of data breaches than malware, but could easily be overlooked because they grab fewer headlines.
And it’s not just the media that causes confusion. The security industry as a whole is full of buzzwords and “flavor of the month” technologies, some of which stick around and some of which don’t. Even for security experts, it’s often difficult to know how limited budgets should be allocated, and which technologies will be worthwhile in the long term.
To continue with our malware example, it would be easy for a business or security leader to attend a security conference and come away thinking they need to immediately invest in a next-generation firewall (NGFW). After all, media headlines have been full of malware attacks, and NGFWs are designed to prevent them.
But is an NGFW really the best use of their resources? It depends on their organization’s existing security infrastructure, the characteristics of the organization itself, what industry it’s in, and so on.
Want to know what it doesn’t depend on? Whether or not there are lots of malware attacks in the media headlines, or discussions about NGFWs on popular security blogs.
Making Risk-Based Decisions
Naturally, whenever an investment decision needs to be made, the majority of business and security leaders want to base it on risk.
Typically, CISOs will use the information they have available to them to determine which threats pose the greatest risk to their organization, and make their investment decisions accordingly. Likewise, at the operational level, incident response and vulnerability management teams will use the information available to them to decide how best to allocate their time and resources.
This need to prioritize is precisely what makes overzealous media attention and industry buzzwords so dangerous. They increase the perceived risk of a threat, which in turn distorts the decision-making process.
But here’s the real problem: Most organizations only have internal data (plus media headlines and industry buzz) to work from. For instance, they might ask:
- “_How have we been attacked in the past?_”
- “_Where do our vulnerabilities lie?_”
- “_What are our competitors doing?_”
- “_Which threats do we hear about most regularly?_”
Unfortunately, there’s a problem with this approach. In a field evolving as quickly as cybersecurity, the past doesn’t predict the future.
Sure, some attack vectors are consistently popular. But even within these vectors the specific tactics, techniques, and procedures (TTPs) are constantly evolving.
Put simply, internal data, while valuable, is an incomplete source of information for decision making.
The Missing Ingredient: Context
So how can you make informed, risk-based decisions across the security function? Simple: By putting internal data into the context of your wider threat landscape.
Threat intelligence will help you identify the specific threats facing your organization, so you can prioritize your time and resources accordingly. And it doesn’t matter where in the security function you are. You could be:
- A CISO looking to make well-informed investment decisions.
- A vulnerability management specialist trying to determine which vulnerabilities are most significant.
- An incident response analyst in need of a way to prioritize alerts.
- A security operations manager planning how the team should allocate its time.
Quite simply, anytime you’re forced to make a decision between two actions (which is pretty much all the time) you must understand the significance of each possible option. By combining your internal data with a broad range of external sources in real time, threat intelligence provides you with the insights you need to make these decisions.
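To make this concrete, here’s a minimal sketch of what “combining internal data with external sources” can look like for vulnerability prioritization. All field names, weights, and sample records below are illustrative assumptions for the sake of the example, not any particular product’s schema or scoring model:

```python
# Hypothetical sketch: rank vulnerabilities by blending internal scan data
# (severity, asset exposure) with an external threat intelligence signal
# (evidence of active exploitation). Weights and fields are assumptions.

def risk_score(vuln):
    """Blend internal severity with external evidence of exploitation."""
    base = vuln["cvss"] / 10.0                              # internal: scanner severity, scaled 0-1
    exploited = 1.0 if vuln["actively_exploited"] else 0.3  # external: threat intel signal
    exposure = 1.0 if vuln["internet_facing"] else 0.5      # internal: asset exposure
    return round(base * exploited * exposure, 3)

def prioritize(vulns):
    """Return vulnerabilities sorted so the riskiest are handled first."""
    return sorted(vulns, key=risk_score, reverse=True)

if __name__ == "__main__":
    scan_results = [
        {"id": "CVE-A", "cvss": 9.8, "actively_exploited": False, "internet_facing": False},
        {"id": "CVE-B", "cvss": 7.5, "actively_exploited": True,  "internet_facing": True},
        {"id": "CVE-C", "cvss": 5.0, "actively_exploited": True,  "internet_facing": False},
    ]
    for v in prioritize(scan_results):
        print(v["id"], risk_score(v))
```

Note how the external context changes the outcome: the 9.8-severity vulnerability with no known exploitation ranks below a lower-severity one that attackers are actively using, which is exactly the kind of reprioritization headlines alone can’t give you.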
It doesn’t even matter where your security function currently stands. Your budget, number of personnel, and current security infrastructure are almost irrelevant. Whatever your position, you need a way to allocate your security resources based on actual cyber risk.
And the only way to do that is to have an accurate, real-time grasp of precisely which threats are the most pressing for your specific organization.
To find out how you can start to integrate threat intelligence right now, no matter what your current security infrastructure looks like, read our new white paper, “5 Reasons to Integrate Threat Intelligence Into Your Security Right Now.”