The 5 Keys to Success in Evaluating a Security Validation Platform

I am routinely asked what the key areas of success are for an Enterprise to evaluate a security validation platform that can objectively validate their security controls, produce the proper evidence and enable strategic business decisions. To answer that important question, I’ve put in place 5 keys to success in evaluating a security validation platform that will drive a data-driven security strategy. Additionally, I will be expanding on these 5 areas in future blogs.

#1 Stay Technology Agnostic and Independent

The goal of a continuous security validation technology is to provide objective evidence of your capabilities and gaps. That evidence allows you to use data to drive your strategic decisions and improve your security baseline. It is crucial for that technology to be independent of any one vendor in order to not produce biased results.

Example: Because a validation technology provides results that key decision makers use to reduce business risk, the testing and validation process must be fair and impartial. Imagine using a validation technology to validate 5 different EDR solutions for POC comparison and selection, but the validation technology itself is owned by one of the EDR vendors. In that situation it is unlikely to produce impartial, technology-agnostic and objective results; the outcome will naturally favor the EDR product tied to the validation vendor.

AttackIQ Strategy and Position: We stand by our technology-agnostic position and impartial and objective results and are currently the largest independent Continuous Security Validation (CSV) vendor in the market.

#2 Use a Platform vs. Tool

When you validate your security controls, using a platform rather than a tool will bring you faster time to value. While tools merely run in your environment and multiple tools can be combined for incremental benefit, platforms integrate into your environment, connecting into your technologies and processes, making deployment easier and enabling you to extend their capabilities to fit your needs.

Example: To use an analogy, a tool is like a hammer, whereas a platform is a toolset you can use to build the entire house. When you use open-source or free tools in your environment to test your security stack, they provide value for a single job, but a single tool won't build a house. Many organizations today want to validate their security assumptions, but using tools and people alone, they can assess less than 1% of most security investments and technology stacks. A platform, on the other hand, not only performs the testing but measures the detection, prevention and response capabilities of all technologies, processes and pipelines, and it helps articulate and communicate those results to the right stakeholders, providing baseline metrics and historical evidence of improvement. In addition, with a platform you can add to your toolset by adding new tests, new integrations and new methods of communication or reporting. A tool will show value, but a platform will provide a positive business impact.

AttackIQ Strategy and Position: The AttackIQ platform was designed from the ground up around an API-first approach, integrating easily into your environment while providing an open platform to build your own scenarios, assessments, reports, technology integrations and other valuable content to fit your needs. Whether your security stack includes traditional or bespoke technologies, you will be able to validate your entire pipeline, from people and processes to tools and technologies.
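
To illustrate what an API-first platform makes possible, here is a minimal Python sketch that launches an assessment and pulls its results over a REST API. The base URL, endpoint paths and field names are hypothetical placeholders for illustration only; they are not AttackIQ's actual API.

```python
import os
import requests

# Hypothetical endpoints and field names -- for illustration only,
# not AttackIQ's actual API.
BASE_URL = "https://validation.example.com/api/v1"
API_KEY = os.environ.get("PLATFORM_API_KEY", "")
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def run_assessment(assessment_id: str) -> str:
    """Trigger an assessment run and return its run ID."""
    resp = requests.post(f"{BASE_URL}/assessments/{assessment_id}/run",
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()["run_id"]


def get_results(run_id: str) -> dict:
    """Fetch detection/prevention/response outcomes for a completed run."""
    resp = requests.get(f"{BASE_URL}/runs/{run_id}/results",
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    run_id = run_assessment("edr-baseline")
    print(get_results(run_id))
```

Because the results come back through an API rather than a tool's UI, they can be fed into your own reporting, ticketing or dashboarding pipelines, which is the practical difference between a tool and a platform.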

#3 Rely on Emulation over Simulation

Emulation is the process of mimicking observable behavior and replicating it within a real environment, whether bare-metal, virtual or cloud. Production environments can be used to emulate tactics, techniques and procedures (TTPs), i.e. attacker behavior, to measure the entire security stack and its controls from the endpoint, network, identity, data, email, cloud and other perspectives. Simulation, on the other hand, has a major shortcoming: a separate environment must be created, and kept up to date, that replicates the real environment including its real security controls. In theory, emulated and simulated environments should be the same, but in practice, simulated environments drift from real environments very quickly. Emulation can always take place within a simulated environment, but the opposite isn't true.

Example: There are two environment models for validating your security controls: one where you create a simulated environment replicating your key endpoint and network controls, and one where you safely emulate key behaviors in your production environment. Simulation requires you to keep this replicated environment in sync; in many cases you can use a gold disk for the endpoint controls, but you will rarely mimic the real endpoint controls, whose configurations and capabilities degrade over time under typical use by users and applications. Emulation, on the other hand, allows an organization to safely test attacker behavior against real production machines, which are the single source of truth for endpoint, network, identity, data and cloud control efficacy at that point in time. We have had multiple customers test WannaCry scenarios in simulated environments only to find out they were testing in a "perfect" environment, and thus the results were not reflective of the production environment. Simulated lab environments are good for proofs of concept, but emulation across a sampling of your production environment is the only true reflection of your environment.
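
As a crude, do-no-harm illustration of emulation on a production endpoint, the sketch below drops the industry-standard EICAR test file (harmless by design) and checks whether the endpoint control quarantines it. This is a standalone example of the idea, not AttackIQ's scenario engine, and some antivirus products may flag the script file itself because it contains the EICAR string.

```python
import os
import time
import tempfile

# The standard 68-character EICAR antivirus test string (harmless by design).
EICAR = (
    "X5O!P%@AP[4\\PZX54(P^)7CC)7}$"
    "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
)


def emulate_malicious_file_drop(wait_seconds: int = 30) -> str:
    """Drop the EICAR file and report whether the endpoint control reacted."""
    path = os.path.join(tempfile.gettempdir(), "eicar_test.txt")
    with open(path, "w") as handle:
        handle.write(EICAR)
    time.sleep(wait_seconds)  # give the endpoint control time to react
    if os.path.exists(path):
        os.remove(path)       # clean up after ourselves
        return "NOT PREVENTED: endpoint control left the test file in place"
    return "PREVENTED: endpoint control removed or quarantined the test file"


if __name__ == "__main__":
    print(emulate_malicious_file_drop())
```

Run against a real production endpoint, a benign check like this reflects the control's actual configuration at that moment; run inside a simulated lab, it only reflects the lab.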

AttackIQ Strategy and Position: We believe that running scenarios within a production environment in a safe, do-no-harm fashion to emulate attack behavior will provide true visibility throughout your entire organization. That being said, we support both emulation within production environments and simulated virtual labs.

#4 Validate Against a Framework

Measuring against a standardized framework gives your organization not only a common lexicon to drive discussions from the technical to the strategic, but also a common, agreed-upon set of criteria to validate against, as opposed to a moving target. A security validation platform must include the ability to map against frameworks like MITRE CAPEC or MITRE ATT&CK, which in recent years has become the de facto standard for measuring defensive capabilities against adversarial behavior. Detection, prevention and response technology vendors, as well as enterprise organizations, have accepted MITRE ATT&CK as a leading framework.

Example: There are various best practices and frameworks, such as NIST CSF and, in more recent years, Lockheed Martin's Cyber Kill Chain®, but no similar framework has risen to industry fame and acceptance as quickly as the MITRE ATT&CK framework. Today we see executive leadership wanting to validate their defensive strategy and security investments against the MITRE ATT&CK framework for one key reason: they want to validate against an industry-accepted framework. It's not enough to measure against a company-specific standard with its own unique lexicon and subjective tests. MITRE ATT&CK provides a standardized knowledge base of attacker behavior that you can map your security controls against and validate.
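
As a small sketch of what "validating against a framework" can look like in practice, the Python snippet below rolls a handful of test outcomes, each tagged with a MITRE ATT&CK technique ID, into detection and prevention coverage numbers. The outcomes are made up for illustration; only the technique IDs and names come from the public ATT&CK knowledge base.

```python
# Illustrative only: a few MITRE ATT&CK techniques with made-up test outcomes,
# showing how raw results roll up into framework-level coverage metrics.
results = [
    {"technique": "T1003", "name": "OS Credential Dumping", "detected": True, "prevented": False},
    {"technique": "T1059", "name": "Command and Scripting Interpreter", "detected": True, "prevented": True},
    {"technique": "T1566", "name": "Phishing", "detected": False, "prevented": False},
    {"technique": "T1055", "name": "Process Injection", "detected": True, "prevented": False},
]


def coverage(results: list, outcome: str) -> float:
    """Percentage of tested techniques with the given outcome."""
    hits = sum(1 for r in results if r[outcome])
    return 100.0 * hits / len(results)


if __name__ == "__main__":
    print(f"Detection coverage:  {coverage(results, 'detected'):.0f}%")
    print(f"Prevention coverage: {coverage(results, 'prevented'):.0f}%")
```

Because every stakeholder can look up what T1003 or T1566 means, the same numbers support both a technical conversation with the SOC and a strategic conversation with leadership.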

AttackIQ Strategy and Position: We are deeply committed to helping organizations operationalize MITRE ATT&CK, and as such, we have implemented scenarios for more of the tactics and techniques within the MITRE ATT&CK framework than any other CSV platform in the market, allowing you to validate your security controls against an industry standard.

#5 Gain Deep Insight from Technology Integrations + Threat Modeling

In the five years since AttackIQ emerged, the market has bifurcated into two general camps: camp #1, which performs security controls validation by providing integrations into the security stack to measure detection, prevention and response, and camp #2, which centers on threat modeling. Camp #1 typically delivers value for strategic leadership, SecOps and IT, while camp #2 typically delivers value for red and/or purple teams.

Example: Typically, we see two immediate needs when an organization wants to discuss Breach and Attack Simulation (BAS) specifically or Continuous Security Validation more broadly.

#1 The security team has a particular attack in mind, like dumping credentials with Mimikatz or exploiting EternalBlue, and wants to perform a gap analysis exercise: taking relevant threats and modeling them in their own organization. To accomplish this, a security control validation platform needs a well-organized set of attack scenarios to select and run. Those scenarios need to realistically model the multiple phases of an attack in order to understand the emulated blast radius and how far such an attack could go from a particular originating endpoint. This is the opportunity to see the attack from the attacker's perspective.

#2 Strategic leadership, SecOps, the SOC and/or IT want to validate that a particular technology, e.g. Carbon Black Defense, is configured correctly and has the capability to detect, prevent or alert against either a known set of threats or a known set of assumptions. In this case, each security capability is specifically tested. This requires being able not only to run a scenario but to integrate with the specific technology and/or log aggregation point to measure and validate accurately, which means the security validation technology must have technology integrations available. A rough sketch covering both of these needs follows below.
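
The sketch below ties the two needs together: it defines a multi-phase attack scenario tagged with MITRE ATT&CK techniques (the gap-analysis view from #1) and then asks a log aggregation point whether each phase was detected (the control-validation view from #2). The scenario content, the SIEM search URL, the query syntax and the response fields are hypothetical placeholders, not AttackIQ's actual scenario format or integrations.

```python
import requests

# Hypothetical multi-phase scenario tagged with MITRE ATT&CK techniques.
# Phase descriptions are illustrative, not AttackIQ scenario content.
SCENARIO = {
    "name": "credential-theft-and-lateral-movement",
    "phases": [
        {"technique": "T1003", "description": "Emulate credential dumping"},
        {"technique": "T1021", "description": "Emulate lateral movement via remote services"},
        {"technique": "T1041", "description": "Emulate exfiltration over a C2 channel"},
    ],
}

# Hypothetical log-aggregation search endpoint -- placeholder URL and fields.
SIEM_SEARCH_URL = "https://siem.example.com/api/search"


def was_detected(technique_id: str, hostname: str) -> bool:
    """Ask the SIEM whether any alert references this technique on this host."""
    resp = requests.post(
        SIEM_SEARCH_URL,
        json={"query": f"attack.technique:{technique_id} AND host:{hostname}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("hit_count", 0) > 0


if __name__ == "__main__":
    for phase in SCENARIO["phases"]:
        # In a real run, the platform would execute each phase first, then
        # query the integration point to confirm detection or prevention.
        status = "detected" if was_detected(phase["technique"], "host-01") else "missed"
        print(f'{phase["technique"]} ({phase["description"]}): {status}')
```

The scenario structure serves the red/purple-team need to model an attack end to end, while the SIEM query serves the SecOps/IT need to prove that a specific control actually fired.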

AttackIQ Strategy and Position: We aim to excel and deliver value for both security control validation and threat modeling, and we believe our architecture and strategy will lead our customers toward business outcomes, acting as a decision support system and ultimately driving insight that will reduce risk and improve your business.

Additional thank you to AttackIQ team members who helped provide input: Brandt Mackey, Andrea Swaney and Tin Tam.