‘The history of revolutions in warfare has shown they are won by those who uncover the most effective ways of using new technologies, not necessarily those who invent the technology first’. — Paul Scharre

I. Introduction
The proliferation of advanced artificial intelligence (AI) is rapidly reshaping the global security landscape, and nowhere is this transformation more destabilizing than in its adoption by non-state actors (NSAs). At the center of this shift is not merely the arrival of new tools but a fundamental change in the nature of conflict itself, heralded by a handful of research labs whose design decisions, safety trade-offs, and release models now function as de facto international security policy. Their internal debates over a model's capabilities and release structure are, in effect, private arms-control negotiations with immediate, public, and global security consequences.

This new reality is driven by an asymmetry of adaptation. Agile NSAs operate without the bureaucratic friction and the ethical and legal constraints that bind states, allowing them to adopt, modify, and deploy new technologies at the speed of innovation. AI thus acts as an evolutionary accelerant for non-state actors while often becoming a bureaucratic burden for the states that must defend against them. Capabilities once confined to the arsenals of well-resourced nation-states are becoming increasingly accessible through commercially available AI tools. This democratization is not just a quantitative shift, giving more actors more tools, but a qualitative leap, empowering a more diverse set of NSAs, from radicalized individuals to organized criminal networks, to challenge established power dynamics with a sophistication previously reserved for states.

Prevailing scholarship often treats state-centric competition as the driver of this shift. This, however, mistakes the effect, a state-level AI arms race, for the cause: the underlying proliferation of the technology itself. In this paper, I argue that the primary driver of the new, increasingly asymmetric threat landscape is the AI development ecosystem. The design, training, and release of frontier models, both closed and open-source, are directly lowering the floor for NSAs to conduct catastrophic attacks and will, in time, raise the ceiling of the damage they can inflict.

This paper will analyze this transformation in three parts. First, it will examine how the dual-use capabilities embedded in lab-built models, from large language models (LLMs) to biological design tools (BDTs), are democratizing the capacity for harm. Second, it will evaluate how the synthesis of these capabilities creates qualitatively new threats. Third, it will assess how these transformations force a fundamental re-evaluation of the strategic calculus of the modern state.