In today’s use of ChatGPT, the OpenAI-run platform delivered a near-perfect answer on how and why security teams should use breach and attack simulation to test and validate their security program performance against specific adversary behaviors, including those of the Russian government. See below; it is almost exactly right.
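To make concrete what that kind of BAS workflow involves, here is a minimal, illustrative sketch in Python. It is not ChatGPT’s answer and not any vendor’s implementation: the adversary technique list, the benign simulation functions, and the `check_siem_alert` stub are all hypothetical stand-ins for what a real BAS platform (or Atomic Red Team tests wired to your SIEM/EDR) would provide.

```python
"""
Illustrative sketch only: a toy breach-and-attack-simulation (BAS) loop that
exercises a few MITRE ATT&CK techniques attributed (hypothetically, for
illustration) to a named adversary, then reports detection coverage.
Every test and detection check below is a benign stand-in.
"""
from dataclasses import dataclass
from typing import Callable


@dataclass
class TechniqueTest:
    attack_id: str                  # MITRE ATT&CK technique ID
    name: str
    simulate: Callable[[], None]    # benign emulation of the behavior
    detected: Callable[[], bool]    # query your SIEM/EDR; stubbed here


def simulate_discovery() -> None:
    # Benign stand-in for T1082 (System Information Discovery).
    import platform
    platform.uname()


def simulate_archive() -> None:
    # Benign stand-in for T1560 (Archive Collected Data).
    import io, zipfile
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("collected.txt", "sample data")


def check_siem_alert(attack_id: str) -> bool:
    # Placeholder: in practice, query your SIEM/EDR API for an alert
    # matching this technique within the test window.
    return False


# Hypothetical adversary profile: technique IDs chosen for illustration only.
ADVERSARY_PROFILE = [
    TechniqueTest("T1082", "System Information Discovery",
                  simulate_discovery, lambda: check_siem_alert("T1082")),
    TechniqueTest("T1560", "Archive Collected Data",
                  simulate_archive, lambda: check_siem_alert("T1560")),
]


def run_assessment(tests: list[TechniqueTest]) -> None:
    detected = 0
    for test in tests:
        test.simulate()
        hit = test.detected()
        detected += hit
        print(f"{test.attack_id} {test.name}: {'DETECTED' if hit else 'MISSED'}")
    print(f"Coverage: {detected}/{len(tests)} techniques detected")


if __name__ == "__main__":
    run_assessment(ADVERSARY_PROFILE)
```

The point of the structure is simply that each test pairs an emulated adversary behavior with a detection check, so the output is a coverage report a security team can act on.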
When I asked ChatGPT the same question about the Russian military, however, it provided the second answer, which is fascinating and telling about the limits OpenAI has set for the natural language algorithm. See the second answer below. The AI can answer technology-related questions, but it changes its approach when it comes to matters of intelligence or warfare.
But first, here is the exact question I asked: “How can you use breach and attack simulation to test your security controls against the Russian military?” See below.
Finally, I went one step further and asked how the U.S. government can best support Ukraine in repelling Vladimir Putin’s invading army. A similar answer follows:
These answers are good for several reasons. First and foremost, they show that OpenAI’s algorithm has built-in limits. The day any person on the open internet can ask a strategic or intelligence-related question about another country or its military and get a real answer, we will have problems. Even after a few weeks of use, we can already tell that ChatGPT’s analytic capability is useful for a range of learning and communication purposes, from explaining the use of BAS as above, to learning farming techniques (go ask “how can I grow a great vegetable garden?”), to studying the arts (“what made Renoir so talented?”). The moment an AI starts spitting out answers to questions about intelligence or the execution of violence, we will have issues; that day is coming, but today, OpenAI deserves credit for this early and wise algorithmic limit.
For more on automated testing, MITRE ATT&CK, and artificial intelligence, see MITRE’s new AI-focused matrix of adversary behaviors, Atlas.