V. Concluding Remarks
Artificial intelligence is fundamentally reconfiguring the landscape of conflict by breaking the state’s long-held monopoly on sophisticated violence and democratizing the tools of strategic harm. The epicenter of this transformation, I argue, is not the technology itself but the handful of research labs whose design decisions now serve as de facto international security policy. This manifests in several ways. The lowering of the floor for catastrophic harm — from the fast, fluent creation of propaganda to the generation of functional malware — is the direct consequence of the dual-use tools these labs have chosen to build and release. The raising of the ceiling on the effects of this harm — from the safeguard-obviating potential of open-source BDTs to the weaponization of AI in commercial drones — is the direct consequence of their proliferation models. And the asymmetry of adaptation, in which agile NSAs outpace state defenses, is not a natural phenomenon but an accelerated one, fueled by the technological and scientific progress these labs have driven.

History shows that humanity has often wrestled with such dual-use dilemmas. The 20th century was defined by the struggle to control chemical precursors for explosives, nuclear materials, and advanced cryptography. The challenges presented by AI, however, are fundamentally different. The Crypto Wars of the 1990s, for instance, centered on a defensive technology, essentially a shield. States sought to regulate encryption because it created opacity. The dilemma of AI is more profound: the state is not merely attempting to regulate opacity; it is confronting the proliferation of agency. A chemical precursor, although dual-use, cannot design a new, more potent explosive. A cryptographic key cannot be an agentic partner in a data extortion campaign. The tools being released by AI labs are not simply knowledge in the traditional sense; in attempting to be a replicable form of ‘cognition,’ they introduce, potentially, a co-conspirator.

This qualitative difference demands a fundamental recalculation of our security paradigm. The traditional strategic framework — based on the principles of state-controlled power, identifiable adversaries, and domain-specific defenses — is no longer a viable model for confronting the accelerating diffusion of power. When an internal debate at a private lab over a model’s release has profound and immediate consequences for global security, and when the adversary behind a catastrophic attack could be practically anyone, the very concept of security has transformed. The challenge we face is not simply to defend against the weapons of tomorrow. It is to build a security paradigm capable of enduring in an age where the power to cause catastrophic harm has been irrevocably democratized — an age in which sovereignty over the creation, definition, and proliferation of that power rests, in large part, within the walls of a few corporations whose policy decisions have become among the most important, and least accountable, in the world.

This is a working text. Comments are welcome.

Max Berger
max@figureten.com