The Problem of Why: Threat-Informed Prioritization in Security Operations.
What does it mean to be threat-informed when it comes to Cyber Defence?
It is one of those classic tough questions that don't have simple answers (at least not ones that are immediately obvious). The great Anton Chuvakin circled back to this topic recently. In his article, he asks an excellent question that goes to the heart of the problem:
"...why does everybody seem to support threat-centric security conceptually, but few practice it operationally?"
Operationalizing a threat-centric approach is not a simple undertaking. You must choose between strategic stances for threat intelligence data collection, information assessment, filtering, enrichment and triage.
You may be tempted to assume that the problem of threat-informed or threat-driven cybersecurity is a threat intelligence one. At its core, however, it is a problem of information significance: the dimensions of data provenance, relevance, interoperability, reliability, actionability and timeliness. What does a particular data cluster mean within the context of your organization, and how does it inform actionable outcomes?
Ultimately, what we want is for information to be actionable: our threat intelligence pipeline should improve the actionability of the threat-related data our environment emits, so that it can drive security control deployments such as detections, mitigations and hardening.
However, the reality in most organizations is far from a meaningful information processing pipeline. Most CyberSecOps models resemble Rube Goldberg machines rather than meaningfully articulated data networks. It suffices to ask your hunting, response, SOC, detection engineering or threat intelligence teams some of the following questions to surface how much they struggle to articulate what constitutes meaningful threat-driven decisions:
What helps drive the priority of your threat detection, hunting and intelligence collection endeavours?
What is your understanding of the purpose of collecting and processing information about threats that may impact your environment?
Why have you chosen risk "A" over risk "B" to be prioritized for action?
How do you determine the relevancy of a threat to your organization?
Do you simply leverage unidimensional criteria like playing MITRE ATT&CK bingo to decide where to best allocate your hunting and detection efforts?
When it comes to building a strategic approach to allocating resources for threat hunting and detection engineering, there is no single "formula" that defines the optimal prioritization model. This, however, does not spare you the need to quest (and question) around what constitutes meaningful progress for your organization. The peril of not doing so is falling into The Inevitable Kraken of Doom, as Dr. Jason Fox puts it:
... we collectively maintain a rich delusion of progress, busily working away, like automaton-golems, towards that what I call ‘The Inevitable Kraken of Doom’—an Eldritch beast that feeds upon the sweet nectar of our impending irrelevance.
Despite all the challenges in navigating the complexities of threat-driven CyberOps, we seem to succeed at what I can only describe as performant ambiguity: an ability to operate coherently in situations with a high degree of uncertainty and complexity. Why is this? What do we intuitively know about threat-driven strategies that we haven't yet elevated to formal models?
In this article we will explore this topic and hopefully bring insight into the problem space.
Threat Actionability in CyberOps
There is a bigger question at play here which will send us on an interesting quest (have you now noticed how "quest" and "question" are related?). And that question is: what even is actionability?
Furthermore, wouldn't it be a derived score? If so, how do we arrive at threat actionability scores?
MITRE defines actionability in its "Top ATT&CK Techniques" Project as:
The opportunity for a defender to detect or mitigate against each ATT&CK technique based on publicly available analytics and security controls.
The data required to score detection availability is obtained from publicly available detection repositories (MITRE's Cyber Analytic Repository, Elastic, Sigma HQ's rules, and Splunk Detections), whilst mitigation availability is sourced from the CIS Critical Security Controls and NIST 800-53 security controls.
MITRE's definition for the "Top ATT&CK Techniques" project is not lacking in regard to its use case (scoring the top 10 techniques based on actionability, choke-point and prevalence), but when applied to a specific organization, it becomes tricky to translate it to the operational reality of detection and mitigation capabilities.
For instance, you may be a small or medium business with low maturity in the "Network Monitoring and Defense" control: you may not have an EDR, or have one that does not provide the granularity required to build effective detections for a broad range of TTPs. This may leave your organization with low actionability scores for process injection techniques, despite the high availability of such analytics in the public realm.
In large and mature organizations, the problem is one of magnitude and complexity. You may have excellent maturity across all your security controls, but the organization is so dynamic, complex and ever-changing that it is not possible to maintain a complete understanding of the full functionality and reach of all the security controls that make up your defence-in-depth layers.
This is why the "Top ATT&CK Techniques" project provides a calculator to derive the 10 most relevant techniques to hunt, detect and mitigate against based on the particularities of your environment.
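To make the environment-specific side of this concrete, here is a minimal sketch (not MITRE's actual calculator) of how public analytic availability could be discounted by your own control maturity to produce an organization-specific actionability score. The weights, the saturation threshold and the maturity inputs are illustrative assumptions, not part of the "Top ATT&CK Techniques" methodology.

```python
# Illustrative sketch: combine public analytic availability with
# self-assessed control maturity to score one ATT&CK technique.
# All weights and thresholds are arbitrary placeholders.

def actionability(public_detections: int, public_mitigations: int,
                  detection_maturity: float, mitigation_maturity: float) -> float:
    """Return a score in [0, 1]. Maturity values are self-assessed in [0, 1]."""
    # Saturating availability: five or more public analytics counts as "fully available".
    det_avail = min(public_detections / 5, 1.0)
    mit_avail = min(public_mitigations / 5, 1.0)
    # Public availability only counts insofar as your own controls can use it.
    return 0.5 * det_avail * detection_maturity + 0.5 * mit_avail * mitigation_maturity

# The small-business example above: plenty of public process-injection
# analytics, but low detection maturity keeps the score low.
score = actionability(public_detections=12, public_mitigations=4,
                      detection_maturity=0.2, mitigation_maturity=0.6)
```

The point of the sketch is the multiplication: a wealth of public analytics contributes nothing if the control that would consume them is immature.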
Despite this, the definition of actionability that we will use for this article series is a bit more nuanced. We will define it as:
The degree to which threat information enables the articulation of organizational resources to allow for the efficient mitigation, detection and response of cyber threats.
When you ask how actionable your threat intelligence is, you are asking about the ability of an organization to articulate decision-making processes based on available information to direct the actions required for mitigating risk exposure to cyber threats.
(yeah I know, any of the above two paragraphs could have been used for a definition, but I had to pick... in the end, I didn't though, and just stated both in a sequence... the puzzle will solve itself though in future iterations, or perhaps just won't 🤨)
Through the remainder of this article series, you will notice that the notion of "actionability of threat information" works as an attractor. Armed with this torch, let's venture into the deeper levels of this cave to shine a light on darker corners.
The Deceiving Funnel of Threat Actionability
Threat intelligence is essentially pragmatic. It aims to communicate situational awareness to facilitate the decision/action phases of our OODA loops. The key here is to understand "communication" since not every utterance or cluster of words in a report communicates information in a way that will produce adequate action chains.
In linguistics, the ability to understand another speaker's intended meaning is called pragmatic competence. For Cyber Defence and Threat Intel, pragmatic competence is the ability to deliver information in such a way that it triggers decision-making processes that improve system resiliency through continuous control optimization and validation.
It's important to consider the most effective ways to transform information into informed decisions that guide specific actions. This is particularly relevant for cyber defence strategies where information needs to be presented in a way that minimizes obstacles and enables downstream processes to easily utilize it to guide their activities. As such, it's crucial to ask yourself: how should we encode this information to reduce friction and facilitate meaningful progress?
Pragmatic competence is not possible without an essential quality dimension of threat information called interoperability:
The degree to which the formats of threat data or intelligence is compatible with consumers’ internal systems allowing it to be accessed and integrated seamlessly. (Threat Intelligence Quality Dimensions for Research and Practice)
This compatibility in the format of threat data does not merely pertain to technology (digital systems) but also to people and processes. A cyber function that is not linked to the value chain of threat-informed defence risks not focusing on the threats that matter to the organization. Focusing means amplifying the ability to anticipate those threats.
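On the technology side, interoperability often reduces to a normalization layer: indicators arrive in whatever shape each feed emits, and a thin mapping turns them into one internal form your downstream tooling can consume. The feed schemas and field names below are invented for illustration, not any vendor's real format.

```python
# Minimal sketch of feed normalization. "feed_a" and "feed_b" schemas
# are hypothetical examples of incompatible source formats.
from dataclasses import dataclass

@dataclass
class Indicator:
    value: str       # the observable itself
    kind: str        # e.g. "ipv4", "domain", "sha256"
    source: str      # which feed it came from
    confidence: int  # normalized to 0-100

def from_feed_a(record: dict) -> Indicator:
    # Hypothetical feed A: {"ioc": ..., "type": ..., "score": 0-10}
    return Indicator(value=record["ioc"], kind=record["type"],
                     source="feed_a", confidence=record["score"] * 10)

def from_feed_b(record: dict) -> Indicator:
    # Hypothetical feed B: {"observable": ..., "category": ..., "conf": 0-100}
    return Indicator(value=record["observable"], kind=record["category"],
                     source="feed_b", confidence=record["conf"])

a = from_feed_a({"ioc": "203.0.113.7", "type": "ipv4", "score": 7})
b = from_feed_b({"observable": "evil.example", "category": "domain", "conf": 55})
```

In practice this is what standards like STIX aim to provide; the sketch just shows why a shared shape matters before anything downstream can act on the data.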
Ultimately, we are seeking to anticipate threats by producing information about them that is actionable. But what are the types of actions that cyber defence teams usually take in the face of cyber threats?
Generally speaking, the range of actions we can enact regarding threats can be reduced to the threat actionability funnel: monitor -> analyse -> implement
Let's represent this with a diėgram* that shows a simplified picture of what we do with cyber threats.
* Diego-Diagram, an illogically logical way of representing something so that it makes partial sense, which despite its shortcomings, still carries enough semantic force to encode the phenomenon into a representational concept that can inspire new -and better- patterns of thinking. It is also just a drawing that many people will find hilarious and possibly informative -on a good day-
Generally speaking, there are:
Those threats that you should monitor and be aware of, either because they relate to your industry vertical, geographical location, product or services.
Out of the threats you've decided to monitor, be that via 3rd party threat intelligence feeds, in-house threat intel capability or both, there's only a small subset that you will actively analyse, i.e. allocate finite human brain-computing power to their analysis.
The threats you have actively analysed will produce a variety of outputs: sometimes a detailed breakdown of attacker TTPs, sometimes high-level information that is not very actionable. The threat information you have converted into actionable knowledge can be implemented practically by deploying it to your security controls.
The term "security control" used in the third point above refers to manual or automated controls of any nature, ranging from blocking an IP in your firewall to passing behavioural information to your hunt team for retro-hunts, or tightening physical controls like access to your premises. The spectrum of security controls runs from procedural ones, like a risk assessment, to technical ones, like adding an IOC to a blocklist.
You have probably identified two things in the diagram above: first, it can be loosely coupled with the DAIKI model of AIMOD2; second, it is loosely aligned with the classic Threat Intelligence Lifecycle. In fact, we could overlay the TI Lifecycle and obtain something like this:
It seems like a nice picture, right? There are threats out there in cyberland going about their day, attempting to find victims and fulfil their destiny as "the cost of doing business", a sort of tax for using the internet.
Nevertheless, we don't really care about all threats, do we? We swiftly strain threats through a nicely shaped funnel that gets rid of irrelevant impurities, leaving us with the pristine material of truly important information.
The problem with the above diagram is that it makes us believe that the threats we have NOT analysed or actioned simply disappear. This is a deceptive representation of how threats interact with our digital perimeter and attack surface.
The reality is that ignored threats remain, lingering in forgotten tunnels of the business, where information decay slowly turns our knowledge into dust (and therefore unknown-known risks).
We have simply decided, through a triage process, that they are not worth our attention, either because they are not applicable to our digital landscape or because we wrongly classified them as irrelevant (when they were actually very much relevant). It may come as a surprise to you, but there is such a thing as a false negative in the threat intelligence world.
Actionability Zones in Threat-Informed Defence
Our businessy and practical minds crave funnels because they are easier to parse for our thinking System 2. The more noise we can discard quickly, the better. System 2 is lazy and governed by the principle of least effort.
Don't get me wrong, I like funnel representations of reality dynamics as much as the next guy 🍺, but because there is a subset of ignored threats that potentially fall in the false-negative bag, it is better to talk in terms of "zones" rather than funnels when it comes to threats. This is how a diėgram would represent it:
Actionability zones indicate how much of your finite human computing time a cluster of threats deserves.
Zones are more ambiguous than funnels, and because of this they also capture higher degrees of complexity. Zones represent areas or regions within a larger space, and the boundaries between them can be less evident. Zones can also overlap: threshold spaces open up between boundaries, where perimeters create an interplay betwixt the "inside" and the "outside".
Actionability zones remove pressure from the threat intelligence collection phase. Collection is meant to be broad but tailored, directed but curious: targeted to your industry vertical, geopolitical dynamics, services, or strategic goals, yet broad enough that you won't make premature determinations about usefulness, risking the exclusion of something that could prove very important in the next phase (analysis).
The more actionable zones still have the less actionable ones as an extended perimeter. There are threats surrounding our implementation zone that deserve further analysis, and threats in our analysis zone that deserve heightened monitoring; we might not do anything about them now, but we might in the near future.
The digital infrastructure of many businesses can be very complex: hybrid architectures, multi-cloud environments, multiple operating systems and OS versions, a dispersed workforce, changing requirements, etc. turn these infrastructures into highly volatile environments (environmental volatility is one of the three tactical disadvantages of cyber defenders, something I will address in future epistles).
During your analysis phase, you may have discarded information regarding specific vulnerabilities or adversarial techniques thinking they don't apply to your digital systems, only to find out a couple of days later that someone in the organization has just deployed the very system that is targeted by those attacks. Threats coexist, just like zones.
But how should we classify threats in terms of the evidence we have about them? Not all threats are known to the same degree: some remain mostly obscure to organizational reasoning, some are anticipated but little is known about them, and some have prolific open-source information that makes them highly transparent to threat intelligence functions.
It is important to understand these differences since our degrees of confidence are rationally constrained by our evidence. In turn, this guides our allocation of resources: it is extremely rare for an organization to allocate high quantities of manpower to deploy security controls for threats that are vaguely understood.
Based on the availability of information regarding threats, we could categorize them into three epistemic states of information:
Hypothetical threats | Actionability factor: Low | Recommended action: Monitor.
These are threats that could potentially impact your organization. Their actionability is low mainly because you have not yet fully qualified them to understand whether they are applicable to your attack surface. This is also why they mostly live in the future horizon. When I say hypothetical here, it means you really don't know whether they could or could not impact your business. You may not even know these threats are possible yet (like asking someone in the 1940s whether we need quantum-resistant cryptography, or whether LLMs can help generate self-sustaining polymorphic malware).

Presumptive threats | Actionability factor: Medium | Recommended action: Analyse.
These are threats you have decided to focus on: you have allocated resources to understand how they impact your attack surface. In terms of a standard Diamond Model, these are threats that carry the potential to damage your business because there is an adversary with a capability that can impact a vulnerability in your infrastructure.

Factual threats | Actionability factor: High | Recommended action: Implement.
These are the threats you have effectively brought under governance. You have aligned or updated your security controls to manage the threat to the best of your organizational capability.
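The three epistemic states above can be expressed as a small lookup keyed on two questions: has the threat been qualified against your attack surface, and is it already under governance? The boolean inputs are a deliberate simplification of what are really gradual judgments.

```python
# The three epistemic states of threat information, as a minimal sketch.
from enum import Enum

class EpistemicState(Enum):
    HYPOTHETICAL = ("monitor", "low")
    PRESUMPTIVE = ("analyse", "medium")
    FACTUAL = ("implement", "high")

    @property
    def action(self) -> str:
        return self.value[0]

    @property
    def actionability(self) -> str:
        return self.value[1]

def classify(qualified: bool, under_governance: bool) -> EpistemicState:
    # Hypothetical: not yet qualified against your attack surface.
    if not qualified:
        return EpistemicState.HYPOTHETICAL
    # Factual: security controls already aligned/updated for this threat.
    if under_governance:
        return EpistemicState.FACTUAL
    # Presumptive: qualified and resourced, but not yet under governance.
    return EpistemicState.PRESUMPTIVE
```

The point is that the recommended action falls out of the evidence state, not the other way around.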
Now please bear in mind, I have not yet even spoken about how those threats were classified and prioritized by your business.
We described actionability zones as fuzzy perimeters that indicate the best courses of action for different classes of threats. But how are threats selected and optimally allocated to their corresponding zones, so that we address the threats that have contextual meaning within our business?
To explore this problem we will have to talk about how threat information relates to entropy and uncertainty about the cyber threats out there.
We will address this in the next articles in this series, stay tuned!