IV. The New Strategic Calculus
This impending reality forces a new strategic understanding, one altered across at least three fundamental dimensions critical for the modern state.

The state no longer holds a monopoly on asymmetric violence
For centuries, the ability to inflict strategic harm — to directly challenge a state’s political stability, economic integrity, or social cohesion — was the exclusive domain of other states. This was a function of scale and resources; conducting complex intelligence operations, deploying sophisticated weapons systems, executing precise targeting, or running mass-influence campaigns required an immense investment in infrastructure, personnel, and capital that only a state could expend.

AI fundamentally challenges this long-held monopoly. A non-state actor can now achieve strategic effects through a variety of low-cost, high-impact means. A coordinated drone attack on critical economic infrastructure, such as the 2019 Houthi strike on Saudi Aramco facilities, which cost little to execute but temporarily halved Saudi Arabia’s oil output, can have strategic economic consequences. An AI-orchestrated information campaign can amplify societal divisions, incite political violence, and erode trust in democratic institutions, directly challenging a state’s political stability without firing a single shot. In kinetic terms, the proliferation of cheap, commercially available, and increasingly autonomous drones threatens to overwhelm and neutralize sophisticated, multi-billion-dollar air defense systems — a key pillar of modern state military power — particularly as the integration of AI into such systems becomes feasible.

The proliferation of AI also creates a high-speed arms race that structurally favors the attacker. The use of AI in offense and defense is not a symmetrical competition. AI-driven malware can adapt to defenses in real time, while AI-powered vulnerability discovery dramatically shortens exploit development timelines. Defensive AI, while powerful, is inherently reactive and must protect an entire, ever-expanding attack surface. Offensive AI, in contrast, enjoys the advantage of asymmetry: it needs to find only a single undiscovered flaw. This creates a persistent attacker’s advantage, in which agile NSAs can develop and deploy novel AI attack vectors before large, bureaucratic state defenses can fully understand and adapt to them. NSAs also benefit from their disregard for international law and ethics: while Western states are increasingly constrained by ethical and legal guidelines governing AI use, NSAs exploit the resulting gap in states’ ability to counter asymmetric, AI-driven threats.
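As an illustrative aside, the attacker's structural advantage can be made concrete with a toy probability model. The sketch below is not drawn from the source; the component counts and per-component flaw rate are hypothetical placeholders, chosen only to show how a defender's exposure compounds with the size of the attack surface while the attacker needs just one flaw.

```python
# Toy model (illustrative only): the defender must secure every component of
# an expanding attack surface; the attacker needs a single exploitable flaw.
# All parameter values below are hypothetical.

def breach_probability(num_components: int, per_component_flaw_rate: float) -> float:
    """Probability that at least one component harbors an exploitable flaw,
    assuming flaws occur independently at a fixed per-component rate."""
    return 1.0 - (1.0 - per_component_flaw_rate) ** num_components

# Even a very low per-component flaw rate compounds quickly as the
# attack surface grows.
for n in (100, 1_000, 10_000):
    print(n, round(breach_probability(n, 0.001), 3))
# 100 -> ~0.095, 1,000 -> ~0.632, 10,000 -> ~1.0
```

Under these assumed numbers, a defense that is 99.9 percent effective per component still leaves a large, exploitable system once the surface is big enough, which is the structural point the paragraph above makes in prose.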

The implication is that states must re-evaluate who constitutes a “peer-level” threat. The strategic landscape is no longer a small club of nation-states, but a crowded, chaotic arena of state and non-state actors with overlapping capabilities.


From predictable threat to probabilistic risk
The traditional national security model has long been based on identifying a finite number of potential adversaries with the capabilities to cause harm — other states, a few major terrorist organizations, hacker groups — and developing specific countermeasures tailored to their known capabilities and intentions. This rested on the assumption that strategic threats were posed by identifiable, trackable, and relatively stable organizations. AI renders this model, too, obsolete.

When the barriers of specialized skill and tacit knowledge required to conduct a sophisticated cyberattack or develop a biological weapon are radically lowered, the “who” of the threat expands to nearly anyone with a motive. When the hardware for a precise kinetic attack can be ordered online or easily smuggled across international borders, the barrier to entry for violence is perilously low. Security postures can no longer be calibrated to the assumed capability of a given NSA, as a group assessed as a low-level threat could suddenly deploy a high-impact, sophisticated attack. The calculus must therefore shift from a predictable, deterministic threat-and-response model to a probabilistic model of risk and resilience. It becomes impossible to predict and prevent every potential attack from every potential actor. It is prudent, instead, to ensure that core societal functions and critical infrastructure can withstand and rapidly recover from a successful attack, regardless of its origin or nature.
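One minimal way to picture the shift from deterministic threat assessment to probabilistic risk and resilience is sketched below. It is not from the source; the scenarios, probabilities, impact figures, and recovery factors are hypothetical placeholders intended only to show how resilience enters the calculation alongside likelihood and impact.

```python
# Illustrative sketch (hypothetical values throughout): expected annual loss
# under a probabilistic risk-and-resilience view, rather than a score tied to
# a fixed list of known adversaries.

from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    annual_probability: float   # chance the attack occurs in a given year
    impact: float               # loss if it succeeds (arbitrary units)
    recovery_factor: float      # 0 = full rapid recovery, 1 = no recovery

def expected_loss(s: Scenario) -> float:
    """Expected annual loss, discounted by how quickly the system recovers."""
    return s.annual_probability * s.impact * s.recovery_factor

scenarios = [
    Scenario("known state actor, hardened target", 0.05, 100.0, 0.3),
    Scenario("previously low-tier NSA, AI-enabled attack", 0.20, 60.0, 0.8),
]

# Investing in resilience (lowering recovery_factor) reduces expected loss
# even when the attack itself cannot be predicted or prevented.
for s in scenarios:
    print(s.name, round(expected_loss(s), 1))
```

With these assumed numbers, the nominally low-tier actor dominates the expected loss, and the largest reduction comes from shrinking the recovery factor, which mirrors the argument that resilience, not prediction, is the tractable lever.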


The imperative of holistic defense
A defensive posture built on institutional and domain-specific silos, in which a cyber command defends networks, an air force defends airspace, and intelligence agencies counter disinformation with too little interaction among them, is catastrophically vulnerable to an adversary who treats these domains as a single, integrated battlespace. The convergence of threats enabled by AI turns a siloed defense posture into a critical liability.

An adversary no longer has to choose between a cyberattack, a disinformation campaign, or a drone strike; AI makes it easier and more cost-effective than ever to conduct all three in a synchronized fashion to achieve systemic, cascading failure. A system can be designed in which AI itself orchestrates an attack carried out by a web of malicious agents across the cyber, information, and kinetic domains. A disinformation campaign can amplify the psychological terror of a drone strike or a CBRN event. A cyberattack on emergency services can create chaos that appears to validate a propaganda narrative of state failure. A kinetic attack can serve as a decoy for a massive cyber intrusion. A massive propaganda campaign can be produced in minutes to sway elections.

The only viable defense against such a converged threat is an equally integrated and holistic security posture. A state that attempts to defend its cyberspace, airspace, and information space as separate, disconnected battlefields will lose to an adversary that treats them as one. National security policy must prioritize the resilience of the entire societal system over the hardening of any single component.