Smack Technologies, spearheaded by former Marine Special Operations commander Andy Markoff, secured $32 million in funding this week to engineer specialized AI models designed to surpass the strategic capabilities of Anthropic’s Claude in military theaters. By focusing on mission planning and tactical execution, the startup positions itself as a direct response to the ethical restrictions imposed by mainstream AI labs, aiming to provide the U.S. Department of Defense with high-fidelity “decision dominance.”
The Tactical Edge: Training the AlphaGo of the Battlefield
Unlike general-purpose large language models (LLMs), Smack’s architecture leverages a reinforcement learning process reminiscent of Google DeepMind’s AlphaGo. The startup trains its models through thousands of simulated war game scenarios in which expert analysts provide feedback signals, teaching the AI which strategies yield the highest probability of success. While Smack lacks the multi-billion-dollar war chests of the frontier labs, Markoff confirms the company is allocating millions to refine its initial models, prioritizing depth of military logic over breadth of general knowledge.
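The article does not disclose Smack’s actual training stack, but the approach it describes, reinforcement learning over simulated scenarios with expert feedback supplying the reward signal, can be illustrated with a toy REINFORCE-style loop. Everything in the sketch below (the linear policy, the scenario features, the feedback rule) is a hypothetical stand-in for illustration, not the company’s implementation.

```python
# Illustrative sketch only: reinforcement learning from expert feedback over
# simulated scenarios. All names, features, and the feedback rule are
# hypothetical stand-ins, not Smack's actual system.

import numpy as np

rng = np.random.default_rng(0)

N_FEATURES = 4      # toy scenario descriptors (terrain, weather, posture, supply)
N_ACTIONS = 3       # toy candidate courses of action
LEARNING_RATE = 0.05

# Policy: a linear softmax over candidate courses of action.
weights = np.zeros((N_FEATURES, N_ACTIONS))

def sample_scenario() -> np.ndarray:
    """Stand-in for a simulated war game state; a real system would use a rich simulator."""
    return rng.normal(size=N_FEATURES)

def expert_feedback(scenario: np.ndarray, action: int) -> float:
    """Stand-in for analyst feedback: a scalar score for the chosen plan.
    In the approach described, this signal would come from human experts or
    evaluated simulation outcomes."""
    preferred = int(scenario[0] > 0)  # made-up preference rule for the toy example
    return 1.0 if action == preferred else -0.2

def act(scenario: np.ndarray) -> tuple[int, np.ndarray]:
    """Sample a course of action from the softmax policy."""
    logits = scenario @ weights
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    action = int(rng.choice(N_ACTIONS, p=probs))
    return action, probs

# REINFORCE-style loop: nudge the policy toward plans the feedback scores well.
for episode in range(5000):
    s = sample_scenario()
    a, probs = act(s)
    reward = expert_feedback(s, a)
    grad = -probs
    grad[a] += 1.0                               # d log pi(a|s) / d logits
    weights += LEARNING_RATE * reward * np.outer(s, grad)

print("action probabilities for a sample scenario:", act(sample_scenario())[1])
```

The key design choice this sketch captures is that the reward comes from human judgment of simulated outcomes rather than from a fixed objective, which is what distinguishes the war-gaming approach from standard supervised fine-tuning.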
The leadership team brings a unique blend of combat experience and commercial tech expertise. Markoff, who executed high-stakes operations in Iraq and Afghanistan, co-founded the venture alongside fellow ex-Marine Clint Alanis and Dan Gould, a computer scientist and former VP of technology at Tinder. This combination aims to bridge the gap between abstract code and the brutal realities of physical conflict.
Strategic Divergence: Why Anthropic Failed the Pentagon
The emergence of Smack Technologies follows high-profile friction between the Department of Defense and Anthropic. A potential $200 million contract collapsed after Anthropic executives sought to restrict their models from being integrated into autonomous weapons systems. This breakdown led Defense Secretary Pete Hegseth to categorize Anthropic as a potential supply chain risk, creating a vacuum for more “military-first” AI developers.
Markoff argues that the current debate over AI ethics often misses a basic technical truth: general-purpose LLMs are fundamentally ill-equipped for the front lines. While models like Claude excel at synthesizing text and summarizing reports, they lack a grounded understanding of the physical world. “I can tell you they are absolutely not capable of target identification,” Markoff asserts, noting that general AI cannot yet reliably control hardware or navigate the chaos of a kinetic environment.
Automating the “Drudgery” of Mission Planning
Despite the sci-fi tropes of fully automated warfare, Smack’s immediate value proposition lies in replacing whiteboards and notepads. Current military planning remains a manual, labor-intensive process. Smack aims to automate the logistical and tactical “drudgery,” allowing commanders to generate and iterate on complex mission plans at superhuman speeds. In a conflict against “near-peer” adversaries like Russia or China, this speed—often called decision dominance—could prove more decisive than traditional firepower.
The Autonomy Debate and the Risks of Escalation
The transition to AI-assisted warfare is not without significant peril. While the U.S. and 30 other nations already utilize autonomous systems for high-speed missile defense, expanding AI into the “kill chain”—the sequence of steps leading to the use of lethal force—remains a flashpoint for critics. Rebecca Crootof, an expert on autonomous weapon law at the University of Richmond School of Law, notes that many systems currently deployed already meet the definition of “fully autonomous.”
However, technical reliability remains the primary hurdle. Research from King’s College London recently demonstrated a disturbing trend: LLMs used in war games tended to escalate conflicts toward nuclear engagement. Anna Hehir, head of military AI governance at the Future of Life Institute, warns that AI is currently too “unpredictable and unexplainable” for high-stakes combat. She argues that these systems struggle to distinguish between active combatants and non-combatants, such as children or surrendering soldiers.
The Reality of Friction in Digital Warfare
Markoff acknowledges that no AI, regardless of its sophistication, can fully account for the “fog of war.” Drawing on his experience in special operations, he notes that real-world missions rarely follow more than 50% of the original plan. “That’s not going to change,” Markoff admits, suggesting that while AI can optimize the starting point, the inherent chaos of conflict will always require human intervention and ethical oversight by those in uniform.
