When a vendor claims they have 100% MITRE ATT&CK® coverage, you should immediately ask: “100% of what, exactly?”
Coverage claims without context are one of the most persistent sources of confusion in security tooling. This post breaks down four myths behind ATT&CK coverage claims and offers a more useful framework for thinking about ATT&CK coverage in practice.
What is MITRE ATT&CK?
ATT&CK is an open, community-driven knowledge base of adversary behaviors observed in the wild. It’s updated twice a year across three matrices (Enterprise, Mobile, and ICS) and documents over 200 techniques and more than 400 sub-techniques. If you’re new to ATT&CK or want to go deeper on the basics, the MITRE ATT&CK website is the right place to start.
ATT&CK coverage refers to how well a security program can detect, prevent, or validate adversary techniques documented in the ATT&CK framework.
However, the concept is often misunderstood. ATT&CK coverage is not simply the percentage of techniques mapped in a tool or heatmap. Meaningful defensive visibility depends on:
- which techniques are relevant to your environment and threat landscape
- the depth and reliability of detections for those techniques
- whether coverage has been validated through testing
- how coverage is maintained as systems and adversary behaviors evolve
For our purposes here, a few things are worth keeping in mind as you read.
The matrix isn’t a left-to-right kill chain. Real attacks loop. Adversaries establish persistence, move laterally, and establish persistence again. Coverage that treats the matrix as a linear checklist is missing the point.
Tactics, techniques, and procedures operate at different levels of abstraction. Tactics describe the adversary’s goal (the why). Techniques describe the method used to achieve that goal (the how). Procedures are specific real-world implementations of a technique: the exact tools, commands, and variations a particular threat actor used in a particular campaign. Procedures are the most granular level and the most likely to change.
Being in ATT&CK doesn’t mean something is inherently malicious. Adversaries frequently use native administrative tools such as WMI, PowerShell, and scheduled tasks because they work and because they’re hard to distinguish from legitimate activity. That dual-use reality is precisely what makes detection hard, and coverage claims easy to inflate.
With that context in place, let’s get into it.
Myth 1: All Techniques Are Created Equal
Not all techniques are created equal — and thinking about coverage without accounting for that will lead you in the wrong direction. It comes down to two things: your terrain and the threats you face.
Your environment shapes what’s relevant. ATT&CK contains techniques that apply to specific technologies. If you’re not running Kubernetes, coverage of container-specific techniques simply doesn’t apply to you. Your industry, geography, and organization type all shape which threat actors are likely to target you and therefore which techniques deserve your focus.
Some techniques are broadly important regardless of environment. There are techniques that show up consistently across threat actor reports year after year. Phishing (T1566) is a reliable example. It consistently ranks at the top of threat intelligence reports, and the associated techniques deserve prioritization for most organizations.
Use case matters too. How you think about technique importance differs depending on whether you’re approaching ATT&CK from a defensive, offensive, or threat intelligence perspective. From a testing standpoint, some techniques are difficult or impractical to test (for example, those requiring physical access), some are potentially destructive, and some simply don’t make sense to exercise in a live environment. That’s a different lens than pure defensive prioritization, and it affects how you should think about and communicate coverage.
Some techniques function as choke points. Certain techniques are points of convergence that appear across many different attacks. Adversaries across campaigns and toolsets are forced to pass through them to succeed. These techniques deserve disproportionate defensive attention regardless of your specific environment or threat profile.
Breadth can mask meaninglessness. T1059 (Command and Scripting Interpreter) is so broad and so widely used for legitimate purposes that claiming coverage for it, without specifying what that means, is virtually meaningless. Are you distinguishing adversarial use from administrative use? Simply having something mapped to that technique tells you very little about your actual defensive posture.
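To see how little “mapped to T1059” can mean, consider how shallow a mapped detection is allowed to be. The heuristic below is deliberately naive and illustrative only (real detections need parent process, user context, signing, and baselines), but a rule this crude would still turn a heatmap cell green:

```python
def looks_suspicious_powershell(command_line: str) -> bool:
    """Naive illustration: flag PowerShell invocations carrying common evasion flags.

    A rule this shallow can be 'mapped' to T1059 on a heatmap while
    distinguishing almost nothing about adversarial vs. administrative use.
    """
    indicators = ["-encodedcommand", "-enc ", "-noprofile",
                  "downloadstring", "bypass", "hidden"]
    cl = command_line.lower()
    return "powershell" in cl and any(i in cl for i in indicators)

# An admin running a routine script is ignored...
print(looks_suspicious_powershell("powershell.exe -File backup.ps1"))  # False
# ...but so is an adversary who uses pwsh.exe instead of powershell.exe.
print(looks_suspicious_powershell("pwsh.exe -enc SQBFAFgA"))           # False
```

The second false negative is the problem: a trivial change of binary name defeats the rule entirely, yet the technique would still be reported as “covered.”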
What this means for your program: Start by mapping your coverage priorities to your actual threat landscape. Which threat actors target your industry and geography? What techniques do they consistently use? Your threat intelligence should drive your prioritization, not technique count or framework completeness. A coverage gap in a technique no relevant adversary uses matters far less than a gap in one that shows up repeatedly in campaigns against organizations like yours.
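One lightweight way to operationalize that prioritization is to rank techniques by how many relevant threat actors are reported to use them. The actor-to-technique mappings below are placeholders; in practice they would come from your CTI sources, such as ATT&CK group mappings or vendor reporting:

```python
from collections import Counter

# Hypothetical actors deemed relevant to our industry and geography,
# each mapped to techniques they are reported to use (placeholder data).
relevant_actors = {
    "ActorA": {"T1566", "T1059", "T1021"},
    "ActorB": {"T1566", "T1078"},
    "ActorC": {"T1566", "T1059"},
}

def prioritize(actors: dict) -> list:
    """Rank techniques by how many relevant actors are reported to use them."""
    counts = Counter(t for techniques in actors.values() for t in techniques)
    return counts.most_common()

print(prioritize(relevant_actors))
# T1566 appears for all three actors, so it lands at the top of the queue.
```

Real prioritization would weight by recency, campaign success, and asset exposure, but even this simple count makes the driver visible: threat intelligence, not technique totals.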
Myth 2: 100% Coverage Is the Goal
We opened this post by telling you to be skeptical of 100% ATT&CK coverage claims. 100% is the wrong target.
ATT&CK was never designed to be a bingo card. The intent is to understand the threats to your organization, prioritize your efforts, and focus on the techniques most relevant to your environment and assets. Chasing 100% inverts that logic entirely.
Depth matters more than the number. Even for a single technique, having something is not the same as having good coverage. Take T1059 again — you may have a detection, but what level of depth does it actually provide? Does it meaningfully distinguish adversarial use from legitimate administrative activity? Having something for that technique tells you very little without understanding the depth behind it.
100% coverage creates SOC noise. If you try to detect everything, you’ll burden your analysts with alerts that don’t reflect real risk. Alerting on every encrypted file, for instance, may do more to disrupt normal business operations than to surface real attacks. Focused coverage on what matters to your environment beats exhaustive shallow coverage.
Compensating controls are part of the picture. Sometimes the right answer isn’t more detections at all. If you’ve gotten really good at blocking lateral movement, that broader control may break adversary attack paths, effectively eliminating access to whole collections of techniques. Techniques don’t happen in isolation — they’re part of sequences, and often the right place to focus is the sequence, not the individual step.
Procedures are where real complexity lives. Procedures, the specific tools, commands, and variations an adversary uses, are the easiest thing for an adversary to change. If your coverage is focused at the procedure level, adversaries will evolve and evade your defenses. The more durable goal is detecting behavioral invariants, the things that don’t change regardless of how a technique is implemented. The Center for Threat-Informed Defense’s “Summiting the Pyramid” project offers a methodology for identifying and targeting those invariants.
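The contrast between brittle procedure matching and a behavioral invariant can be sketched in code. The event fields, process names, and checks below are entirely hypothetical, and real credential-theft detections are far more involved; this only illustrates why an invariant outlasts a procedure:

```python
def procedure_match(event: dict) -> bool:
    """Matches one exact tool invocation; trivially evaded by renaming or re-flagging."""
    return event.get("command_line") == "mimikatz.exe sekurlsa::logonpasswords"

def behavior_match(event: dict) -> bool:
    """Matches the invariant: an unexpected process reading lsass.exe memory.

    The adversary can rename tools and rewrite command lines, but dumping
    credentials from LSASS still requires reading that process's memory.
    """
    return (event.get("target_process") == "lsass.exe"
            and event.get("access") == "read_memory"
            and event.get("source_process") not in {"csrss.exe", "wininit.exe"})

renamed_tool = {
    "command_line": "svc_update.exe -x",   # renamed binary, different flags
    "source_process": "svc_update.exe",
    "target_process": "lsass.exe",
    "access": "read_memory",
}
print(procedure_match(renamed_tool))  # False: the procedure changed
print(behavior_match(renamed_tool))   # True: the behavior did not
```

The allowlist of “expected” source processes is the weak point of even the behavioral version, which is why real invariant-hunting work (like Summiting the Pyramid) focuses on picking observables the adversary cannot cheaply change.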
What this means for your program: Push back on the goal of 100% ATT&CK “bingo.” Instead, identify the choke points in attack paths relevant to your environment and build deep, validated coverage there first.
Myth 3: Coverage Is Binary
Red and green ATT&CK heatmaps are everywhere, and they say little about defensive effectiveness.
When a technique is red, it looks like a problem. When it turns green, it looks solved. But ATT&CK coverage isn’t a switch. It exists on a spectrum that varies across techniques, and the gap between a green box and actual defensive capability can be enormous.
Coverage for a procedure is NOT coverage for a technique. Having a single detection for a single procedure doesn’t mean you’ve “covered” a technique. Coverage depends on which procedure was tested, on which platform, under which operating system, and whether the detection functions with real-world noise.
Coverage isn’t static. Environments change. Patches happen. A detection that worked last month may have silently broken this month. There’s a real need to continuously validate coverage, not just claim it once and move on.
One comment from a webinar attendee captured it perfectly: “100% coverage, 0% response.”
Just having detections means nothing if they don’t produce signal that your team can act on. Heatmaps show gaps reasonably well. They don’t show depth, recency, or quality.
Transparency is critical. There is also a transparency problem in how coverage gets communicated. Organizations frequently don’t specify the level at which detection is actually occurring, whether it’s a specific procedure, a sub-technique, or a behavioral invariant. Without that clarity, coverage claims are nearly impossible to compare or trust. As a community, we need better norms for communicating not just what is covered, but at what level, in what conditions, and how recently it was validated.
What this means for your program: Build validation into your program as a first-class activity, not an afterthought. For each technique you call covered, know what “covered” means in your environment (whether it detects, alerts, blocks, or logs) and track when it was last validated. Coverage that hasn’t been tested recently should be treated as unknown, not green. The goal isn’t a better heatmap; it’s confident, current knowledge of where your defenses hold.
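Folding validation recency into status, rather than keeping a binary red/green, might look like the sketch below. The status names and the 90-day threshold are illustrative assumptions, not an established standard:

```python
from datetime import date, timedelta
from typing import Optional

def coverage_status(has_detection: bool,
                    last_validated: Optional[date],
                    today: date,
                    max_age: timedelta = timedelta(days=90)) -> str:
    """Three-state coverage status instead of a binary heatmap color."""
    if not has_detection:
        return "gap"        # nothing mapped at all
    if last_validated is None or today - last_validated > max_age:
        return "unknown"    # a detection exists but is unproven or stale
    return "validated"      # tested recently enough to trust

print(coverage_status(True, date(2023, 6, 1), today=date(2024, 2, 1)))   # "unknown"
print(coverage_status(True, date(2024, 1, 20), today=date(2024, 2, 1)))  # "validated"
```

The important property is that a detection nobody has exercised recently decays into “unknown” automatically, instead of sitting green forever.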
Myth 4: ATT&CK Covers All Adversary Behavior
This is perhaps the most important myth for practitioners to internalize — because the consequences of getting it wrong are invisible until they suddenly aren’t.
ATT&CK documents what has been publicly observed and reported. Techniques used by adversaries that haven’t been publicly disclosed aren’t in the framework. Researcher-proven techniques that haven’t been seen in the wild aren’t in the framework. And because ATT&CK is updated twice a year, there’s always a backlog of observed behaviors that haven’t made it into the public knowledge base yet.
Attackers evolve faster than ATT&CK. At any given moment, there are essentially three layers of adversary behavior that live outside ATT&CK: things the community has observed but that still sit in the ATT&CK CTI backlog, things researchers have proven feasible but that haven’t been seen in the wild, and things that simply haven’t been reported publicly at all.
The decision to keep ATT&CK strictly limited to in-the-wild observations is deliberate, and it’s the right call. It keeps the framework grounded in what’s real and operationally relevant. But it means that using ATT&CK as a complete picture of what adversaries can do is a mistake.
Plan on extending ATT&CK. Many organizations maintain internal ATT&CK extensions to track behaviors not yet in the public matrix. The ATT&CK team maintains ATT&CK Workbench specifically to support that kind of extension, and it’s worth exploring if you’re not already using it.
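Even before adopting Workbench, the simplest form of internal extension is just a tracked record per not-yet-public behavior. The layout below is purely illustrative and is not the ATT&CK or Workbench schema (real Workbench extensions are expressed as STIX objects); the behavior, incident reference, and ID namespace are hypothetical:

```python
# Illustrative internal tracking records for behaviors not yet in the
# public matrix. All values are placeholders invented for this sketch.
internal_extensions = [
    {
        "internal_id": "INT-0001",                 # our own namespace, not a MITRE ID
        "name": "Hypothetical cloud token relay",  # placeholder behavior
        "closest_public_technique": "T1550",       # nearest ATT&CK anchor, if any
        "source": "internal incident (placeholder)",
        "status": "observed internally, not yet publicly reported",
    },
]

# When ATT&CK later publishes an equivalent technique, the record can be
# retired and detections remapped to the official ID.
for ext in internal_extensions:
    print(ext["internal_id"], "->", ext["closest_public_technique"])
```

Anchoring each internal record to the nearest public technique keeps the extension compatible with your existing heatmaps and makes the eventual remapping mechanical rather than archaeological.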
The practical implication: a security program built exclusively around ATT&CK coverage will have blind spots by design. Supplementing with threat intelligence feeds and internal tracking isn’t optional.
What this means for your program: Treat ATT&CK as a floor, not a ceiling. Use threat intelligence feeds to track behaviors emerging before they reach the framework and consider using internal extensions for techniques relevant to your environment that aren’t yet in the public matrix. The question isn’t just “are we covered against what’s in ATT&CK?” It’s “are we tracking what’s coming?”
Four Questions Worth Asking Any Vendor
Before accepting any ATT&CK coverage claim, push on these questions:
- Which techniques are covered, and on which platforms? Coverage that doesn’t map to your actual infrastructure is irrelevant coverage. A vendor strong on Windows endpoints provides limited value in a hybrid cloud environment. Push for specificity on platform, OS version, and deployment context.
- How was coverage validated, and how recently? Detections degrade silently. A log source config change, a system update, or a new data pipeline can invalidate a detection that worked last quarter with no visible indication. Ask for validation methodology, test environment, and date. If they can’t answer, the coverage claim is unverified.
- What does “covered” mean operationally? Does it detect, alert, block, or log? At what fidelity? A detection that fires in a lab against a known procedure but produces no actionable signal in a production environment with normal noise isn’t coverage; it’s a benchmark artifact. Ask how detections perform under realistic conditions.
- Why were these techniques prioritized? A vendor who can articulate a prioritization rationale, grounded in threat intelligence, relevant to your industry, tied to actual attack path analysis, is doing the real work. A vendor who answers with “we cover the most techniques” is optimizing for the metric, not the outcome.
Where Do We Go from Here?
If there’s one thread running through all four myths, it’s this: coverage is only meaningful when it’s grounded in your environment, your threats, and an honest accounting of depth and validation. A few practical principles to take away:
- Prioritize by threat model, not technique count. Know which threat actors are relevant to your industry and geography and focus your coverage on the techniques they actually use.
- Invest in depth at choke points before expanding breadth. Techniques that appear across many attack paths deliver the most defensive value when covered well. Start there.
- Validate continuously. A detection that hasn’t been tested recently may not be a detection at all. Coverage is a state that requires ongoing maintenance, not a box checked once.
- Expect coverage claims to come with context. A percentage without methodology behind it is a starting point for a conversation, not an answer. Understanding what was tested, how, and on which platforms is what makes a coverage claim meaningful.
The volume and quality of questions we received during the webinar was itself a signal. Practitioners are clearly wrestling with these issues, and the confusion around coverage claims is real and widespread. What’s missing is a shared framework: a community-agreed definition of what “covered” means, and practical guidance for making and communicating coverage decisions consistently. We see this as an open problem worth solving, and one the community has clearly shown interest in advancing.
Follow our blog for updates as we continue exploring the realities of ATT&CK coverage, threat-informed defense, and exposure validation, or connect with us on LinkedIn if you’d like to help.
