GAO Finds Gaps in DoD Cyberdefenses, Highlights Importance of Breach and Attack Simulation Tools


by Stacey Meyer, Vice President, Federal Operations, AttackIQ

In December 2020, the Government Accountability Office (GAO), which provides audits and investigative services for the U.S. Congress, released the results of a study of 15 major Department of Defense (DoD) IT programs. The goal of the study was to gauge the effectiveness of the Defense Department’s $36.1 billion in annual IT investments. The results are a little bleak.

Two-thirds of the major IT programs studied (10 of the 15) missed deadlines. One was running a full five years behind schedule. In addition, cybersecurity practices were inconsistent, at best. “With respect to cybersecurity, programs reported mixed implementation of specific practices,” the executive summary reads. The GAO report speculates that these two findings might be related: “According to the DOD Cybersecurity Test and Evaluation Guidebook, programs that do not perform developmental testing are at an increased risk of cost and schedule growth and poor program performance.”

The Cybersecurity Gap: Missing Assessments

One area in which the DoD IT programs lagged is cybersecurity risk management. Respondents from six of the programs reported that they do not conduct any cybersecurity vulnerability assessments (one respondent didn’t know whether their program does). This means that managers of only eight of the 15 major programs actually know where the security control gaps are in their IT programs.

Along the same lines, the GAO study found gaps in security testing. DoD Instructions 5000.75 and 5000.02 require that major IT programs complete both developmental and operational cybersecurity testing. Nevertheless, four of the 15 programs in the GAO study failed to conduct operational cybersecurity testing (including cooperative vulnerability and penetration assessments and adversarial testing). And nine of the 15 failed to conduct developmental cybersecurity testing. In fact, the GAO study found that only three programs conducted adversarial assessments as part of their development process.[1]

What are the implications of these testing gaps? The GAO report notes some telling correlations. Of the eight programs that do conduct vulnerability assessments, four experienced a delay in program schedule and four did not. Of the six programs that do not perform vulnerability assessments, five (83 percent) experienced a schedule delay; only one did not. (The remaining study participant did not know whether their program performed vulnerability assessments.)

The report states that the programs that conduct vulnerability assessments “experienced fewer increases in planned program costs and fewer schedule delays relative to the programs that did not report cybersecurity vulnerability assessments.”

Worse, failing to test the security environment effectively prevents an agency from understanding how well its infrastructure actually protects IT assets. It’s impossible to know whether security controls are effective and security solutions are performing as advertised unless the organization is running breach and attack simulation (BAS) exercises.

Even routine BAS testing—say, once a year—is not adequate. An attacker might be able to navigate around the network for weeks or months before the breach is detected. Consider the SolarWinds attack. Attackers entered the SolarWinds infrastructure in September 2019, deployed malicious code through June 2020, and were discovered in December 2020. SolarWinds shows us how advanced nation-state actors have the financial resources, personnel, and time to invest in novel methods of intrusion; they will constantly work to find new ways to break in. This makes it vital for organizations to continuously test and validate their post-breach defenses to stop lateral movement and other post-breach tactics and techniques that are described in the MITRE ATT&CK® framework.

Automated BAS Testing Enables Proactive Defense

The AttackIQ Security Optimization Platform is built to give users insight into how well their security controls are working, through the kind of adversarial testing that most IT programs in the GAO study lacked. The platform emulates adversary tactics and techniques described in the MITRE ATT&CK framework to test and assess the effectiveness of security controls.
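To make the idea concrete, here is a minimal Python sketch of what ATT&CK-aligned control validation looks like in principle. The emulation step is stubbed out with canned results, and none of the function names are AttackIQ’s actual API; only the technique IDs are real ATT&CK identifiers.

```python
# Minimal, illustrative sketch of ATT&CK-aligned control validation.
# The emulation step is stubbed: a real BAS platform executes safe,
# benign test actions and queries the security stack for the outcome.
# Nothing here is AttackIQ's actual API.

from dataclasses import dataclass

# Real MITRE ATT&CK lateral-movement technique IDs to validate against.
TECHNIQUES = {
    "T1021": "Remote Services",
    "T1550": "Use Alternate Authentication Material",
    "T1570": "Lateral Tool Transfer",
}

@dataclass
class ScenarioResult:
    technique_id: str
    prevented: bool   # a control blocked the emulated behavior
    detected: bool    # monitoring raised an alert on it

def run_emulation(technique_id: str) -> ScenarioResult:
    # Stub standing in for safely emulating the technique and
    # collecting the verdict from defensive tooling.
    canned = {"T1021": (True, True), "T1550": (False, True),
              "T1570": (False, False)}
    prevented, detected = canned.get(technique_id, (False, False))
    return ScenarioResult(technique_id, prevented, detected)

def assess() -> list[ScenarioResult]:
    results = [run_emulation(tid) for tid in TECHNIQUES]
    for r in results:
        status = "covered" if (r.prevented or r.detected) else "GAP"
        print(f"{r.technique_id} ({TECHNIQUES[r.technique_id]}): {status}")
    return results

if __name__ == "__main__":
    assess()
```

The point of the structure is that every scenario maps to a named adversary behavior, so a “GAP” verdict points directly at the technique a control failed to stop or see.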

Automation makes testing routine. If something changes in the environment that degrades a security control’s effectiveness, the organization catches the issue the next time the automated assessment runs, rather than weeks or months later. Once security staff are aware of the issue, they can drill down into what changed in the environment and remediate the problem.
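A simple way to picture this feedback loop: re-run the same assessment on a schedule and flag any technique whose outcome regressed since the previous run. The sketch below assumes a hypothetical run_assessment() helper standing in for a scheduled BAS run; it is illustrative, not AttackIQ’s implementation.

```python
# Sketch of continuous validation with drift detection (illustrative).

import time

# Rank outcomes so a drop in rank means the control got worse.
RANK = {"prevented": 2, "detected": 1, "missed": 0}

def run_assessment() -> dict[str, str]:
    # Hypothetical stand-in: a real run re-executes the emulation
    # scenarios and returns each technique's current outcome.
    return {"T1021": "prevented", "T1550": "detected", "T1570": "missed"}

def regressions(baseline: dict[str, str],
                current: dict[str, str]) -> list[str]:
    """Technique IDs whose outcome is worse than in the baseline run."""
    return [tid for tid, outcome in current.items()
            if RANK[outcome] < RANK.get(baseline.get(tid, "missed"), 0)]

def monitor(interval_seconds: int = 86400) -> None:
    baseline = run_assessment()
    while True:
        time.sleep(interval_seconds)  # e.g., daily instead of yearly
        current = run_assessment()
        for tid in regressions(baseline, current):
            # In practice this would page the SOC or open a ticket.
            print(f"ALERT: control effectiveness regressed for {tid}")
        baseline = current
```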

In this way, AttackIQ’s Security Optimization Platform gives an agency a proactive—rather than a reactive—security posture. It enables continuous validation of security controls to definitively establish the effectiveness of key initiatives, to include zero-trust controls that prevent adversaries from moving laterally across a network, as in the case of SolarWinds.

More Efficient Procurement, DevOps, and Compliance

After deploying the AttackIQ Security Optimization Platform for continuous security validation, a DoD agency can reap other benefits as well. During due diligence for prospective security purchases, the platform provides visibility into how the defensive technologies under consideration would actually function. An apples-to-apples bakeoff shows how different options perform against the same real-world attack behaviors, which reduces the chance that the selected solution turns out to be less effective than expected. Adding AttackIQ to the due diligence process cuts waste in security technology investments, speeds up procurement cycles, and helps the agency make better security investment decisions going forward.
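As an illustration of the bakeoff logic (with made-up placeholder results, not real vendor data), the comparison boils down to scoring every candidate against the identical scenario set:

```python
# Illustrative "apples-to-apples" bakeoff: score each candidate product
# against the identical emulation scenarios. Results are placeholders.

def coverage(results: dict[str, str]) -> float:
    """Fraction of emulated techniques the product prevented or detected."""
    covered = sum(1 for outcome in results.values() if outcome != "missed")
    return covered / len(results)

# Same scenarios for every candidate -- that is what makes the test fair.
candidates = {
    "Product A": {"T1021": "prevented", "T1550": "detected", "T1570": "missed"},
    "Product B": {"T1021": "prevented", "T1550": "missed", "T1570": "missed"},
}

for name, results in sorted(candidates.items(),
                            key=lambda kv: coverage(kv[1]), reverse=True):
    print(f"{name}: {coverage(results):.0%} of scenarios covered")
```

Because every product faces the same emulated behaviors, differences in the coverage score reflect the products themselves, not the test.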

The AttackIQ Security Optimization Platform also provides insight into the effectiveness of security controls in government off-the-shelf (GOTS) software. Regular testing throughout the development process ensures that any gaps can be patched before the agency begins to use the GOTS solution. And adding AttackIQ testing to the continuous integration/continuous delivery (CI/CD) process helps an agency identify and patch security issues in new releases and updates of internally developed software before they reach production.
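One way to wire this into a pipeline is a gate stage that runs the assessment against the release candidate’s environment and fails the build on any uncovered technique. The sketch below is hypothetical; a real integration would call the platform’s API rather than the stubbed run_assessment() shown here.

```python
# Hypothetical CI/CD security gate: fail the pipeline stage if any
# emulated technique goes both unprevented and undetected.

import sys

def run_assessment() -> dict[str, str]:
    # Stand-in for executing the BAS scenario set against the staging
    # environment for this release candidate.
    return {"T1021": "prevented", "T1550": "detected", "T1570": "missed"}

def main() -> int:
    gaps = [tid for tid, outcome in run_assessment().items()
            if outcome == "missed"]
    if gaps:
        print(f"Security gate FAILED; uncovered techniques: {gaps}")
        return 1  # non-zero exit fails the pipeline stage
    print("Security gate passed.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```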

Another benefit of deploying the AttackIQ Security Optimization Platform is its support for contractors pursuing compliance with DoD’s Cybersecurity Maturity Model Certification (CMMC). The platform includes new assessments specifically designed to validate CMMC compliance. These assessments enable contractors to use specific adversary behaviors from the AttackIQ library to guide testing and to produce evidence of performance against CMMC standards.
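Conceptually, the evidence such assessments produce pairs each emulated behavior with the compliance practice it exercises. The sketch below illustrates the idea with a hypothetical technique-to-practice mapping; it is not an official CMMC-to-ATT&CK crosswalk, and the record format is invented for illustration.

```python
# Illustrative compliance-evidence generation: map each emulation result
# to the practice it exercises. The mapping is hypothetical, not an
# official CMMC-to-ATT&CK crosswalk.

import json
from datetime import datetime, timezone

# Hypothetical mapping of emulated techniques to CMMC practice IDs.
TECHNIQUE_TO_PRACTICE = {
    "T1021": "AC.L2-3.1.1",   # limit system access to authorized users
    "T1550": "IA.L2-3.5.3",   # multifactor authentication
}

def evidence_record(technique_id: str, outcome: str) -> dict:
    return {
        "practice": TECHNIQUE_TO_PRACTICE[technique_id],
        "technique": technique_id,
        "outcome": outcome,
        "tested_at": datetime.now(timezone.utc).isoformat(),
    }

results = {"T1021": "prevented", "T1550": "detected"}
print(json.dumps([evidence_record(t, o) for t, o in results.items()],
                 indent=2))
```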

So far, AttackIQ has identified 26 specific use cases that the Security Optimization Platform supports within a security program, from controls assessment to compliance mapping to exercise enablement and analyst training. Agencies that leverage the platform build an evidence-supported knowledge base about the effectiveness of their security controls.

Deploying the Security Optimization Platform, with the MITRE ATT&CK framework to structure and prioritize simulations, is the best way for an agency to strengthen its security against real-world threats. In a world of known threat behaviors, this is best practice, and it should be an imperative for agencies.


[1] The GAO defines an adversarial cybersecurity developmental test as “a cybersecurity developmental test and evaluation activity that uses realistic threat exploitation techniques in representative operating environments.”