This Defense Startup Built AI Agents That Can Blow Things Up

Like many companies in Silicon Valley today, Scout AI trains large AI models and agents to automate tasks. The difference is that instead of writing code, answering emails, or shopping online, Scout AI’s agents are designed to find and destroy targets in the physical world using explosive drones.
In a recent demonstration at a classified military facility in central California, Scout AI’s technology took control of a self-driving off-road vehicle and two lethal drones. The agents used these systems to locate a truck hidden in the area, then destroyed it with an explosive charge.
“We must introduce next-generation AI to the military,” Colby Adcock, CEO of Scout AI, told me in a recent interview. (Adcock’s brother, Brett Adcock, leads Figure AI, a startup building humanoid robots.) “We take a hyperscaler foundation model and train it to transition from a generalized chatbot or assistant to a warfighter.”
Adcock’s firm is part of a new wave of startups racing to adapt technology from the big AI labs for military use. Many policymakers argue that harnessing AI will be crucial to future military superiority. AI’s potential on the battlefield is one reason the US government has sought to restrict sales of advanced AI chips and chipmaking equipment to China, although the Trump administration recently chose to relax some of those restrictions.
“It’s beneficial for defense tech startups to innovate with AI integration,” says Michael Horowitz, a professor at the University of Pennsylvania who previously served in the Pentagon as deputy assistant secretary of defense for force development and emerging capabilities. “That’s precisely what they should pursue if the US aims to lead in military AI adoption.”
However, Horowitz also points out that integrating the latest AI technologies can be particularly challenging in practice.
Large language models are inherently unpredictable, and AI agents, such as those powering the popular AI assistant OpenClaw, can malfunction even on relatively simple tasks like shopping online. Horowitz notes that proving these systems are robust from a cybersecurity perspective, something essential for military use, may be especially difficult.
Scout AI’s recent demonstration unfolded in several stages, with AI exercising autonomous control over combat systems at each one.
At the mission’s start, an operator entered a plain-language command into a Scout AI system known as Fury Orchestrator, directing it to find and destroy the hidden target.
A relatively large AI model with more than 100 billion parameters interprets this command; it can run either on a secure cloud platform or on an isolated on-site computer. Scout AI uses an undisclosed open-source model without the usual guardrails. That model acts as an agent, directing smaller, 10-billion-parameter models running on the ground vehicle and drones involved in the operation. The smaller models act as agents in turn, issuing commands to the lower-level AI systems that manage each vehicle’s movements.
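Scout AI has not published the interfaces that connect these layers, but the general pattern it describes, a large orchestrator model decomposing a mission into structured subtasks for smaller on-vehicle agents, which then drive lower-level control systems, can be sketched in a few lines of Python. Everything below, from the class names to the message format, is a hypothetical illustration of that pattern, not Scout AI’s code.

```python
# Hypothetical sketch of the hierarchical agent architecture described above.
# None of these names or interfaces come from Scout AI; the pattern is:
# orchestrator model -> per-vehicle agent -> low-level vehicle control.

from dataclasses import dataclass


@dataclass
class Subtask:
    vehicle_id: str  # which platform should execute this step
    action: str      # high-level action, e.g. "navigate" or "search"
    params: dict     # action-specific parameters


class Orchestrator:
    """Stand-in for the large (~100B-parameter) model that interprets the mission."""

    def plan(self, mission: str) -> list[Subtask]:
        # In a real system, a foundation model would decompose the free-text
        # mission into structured subtasks; hard-coded here for illustration.
        return [
            Subtask("ugv-1", "navigate", {"waypoint": (36.61, -121.90)}),
            Subtask("drone-1", "search", {"area": "grid-B4", "target": "truck"}),
            Subtask("drone-2", "search", {"area": "grid-B5", "target": "truck"}),
        ]


class EdgeAgent:
    """Stand-in for a smaller (~10B-parameter) model running on a vehicle or drone."""

    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id

    def execute(self, task: Subtask) -> None:
        # The on-board agent would translate the subtask into commands for
        # lower-level autonomy systems (path planning, flight control).
        print(f"[{self.vehicle_id}] {task.action} {task.params}")


def run_mission(mission: str) -> None:
    orchestrator = Orchestrator()
    agents = {vid: EdgeAgent(vid) for vid in ("ugv-1", "drone-1", "drone-2")}
    for task in orchestrator.plan(mission):
        agents[task.vehicle_id].execute(task)


if __name__ == "__main__":
    run_mission("Locate and destroy the hidden truck in the search area.")
```

One notable property of a hierarchy like this sketch is that the large model never touches actuators directly: it only emits structured subtasks that the smaller on-board agents interpret, which is also what lets the edge models stay small enough to run on the vehicles themselves without a network connection.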
Moments after receiving its orders, the ground vehicle sped off down a dirt path winding through brush and trees. A few minutes later it halted and launched the pair of drones, which flew to the area where the target was reported to be. Once the truck was identified, an AI agent on one of the drones commanded it to close in and detonate its explosive charge just before impact.
