The trajectory of artificial intelligence is a subject of intense debate, especially concerning its impact on global power dynamics and national strategy. The discussion is often framed around three distinct camps: the Sprinters, the Marathoners, and the Skeptics, each proposing a fundamentally different vision of AI’s evolution and of the policy responses needed to secure national interests.
The Sprinters embody the most aggressive perspective, believing that profound disruption via artificial general intelligence (AGI), understood here as human-level intelligence, is imminent. Adherents, including prominent U.S. technologists and a segment of the national security community, argue that the first country to achieve AGI will secure enduring geopolitical advantages. This view calls for an “all-out” race to AGI: massive investment in training frontier models, stockpiling advanced chips, and active measures to slow rival programs.
In contrast, the Marathoners foresee AI diffusing selectively, sector by sector, through significant yet incremental improvements rather than a singular leap to AGI. This camp expects AI to reshape key industries such as finance, health, and transportation without AGI arriving by 2030. Their focus accordingly shifts from training models to optimizing AI workloads for inference, with an emphasis on widespread adoption and cost control in technology strategy.
The Skeptics take the most cautious position, positing that AGI is decades away, if it ever arrives. Figures in this camp warn against overhyping AI, fearing misallocated capital, inflated energy demands, and potential financial contagion from an AI investment bubble. They advocate prioritizing non-AI national security capabilities, committing scarce resources only where AI adoption yields indisputable benefits, and monitoring AI developments closely.
These perspectives shape how each camp would allocate resources and manage talent within the U.S. national security framework. Sprinters would direct all available resources to the existential AGI competition, even at the expense of other defense priorities, while Marathoners would prioritize AI investments case by case. On talent, Sprinters push for immediate visas for elite AI researchers, Marathoners would scale domestic science and technology education, and Skeptics favor broad digital-literacy upskilling to address AI’s societal risks.
The camps also differ on AI’s infrastructure and energy requirements. Sprinters envision massive data campuses for training models, requiring rapid deployment of every available energy source. Marathoners, while also embracing an “all-of-the-above” energy approach, emphasize generation suited to inference workloads, particularly solar and batteries, along with long-term grid buildout. Skeptics, by contrast, warn against overbuilding potentially inefficient energy infrastructure for uncertain returns.
Alliance strategies diverge as well. Sprinters advocate strict chip embargoes, believing that AGI development will entrench U.S. leadership. Marathoners focus on building a long-term ecosystem of like-minded AI partners, prioritizing traditional allies while cultivating partnerships with “swing powers” and global talent sources, reflecting the broader geopolitics of technology. Skeptics, however, view allied chip controls as potentially counterproductive and favor cooperation with China on AI over strict competition.
Ultimately, the Marathoner approach appears to best serve U.S. interests, offering a balanced and flexible path forward. AI is already a powerful tool with growing capabilities, which undercuts the Skeptics’ caution, yet the near-term arrival of AGI remains unlikely, which diminishes the appeal of the Sprinters’ “all-in” bet. The Marathoner approach lets the United States scale its AI efforts dynamically, adapting to real-world developments while minimizing the risks of both overreach and underinvestment.
Regardless of which camp proves most accurate, U.S. policymakers should adopt scenario-agnostic measures: deepening full-spectrum cooperation between American AI companies and U.S. security services, maintaining access to high-skilled, AI-relevant labor at home and abroad, and carefully managing supply-chain exposures. A pragmatic and adaptable approach to AI’s energy demands is also vital. By continuously recalibrating AI strategy against the latest developments, the United States can remain competitive in what will be a defining element of the Sino-American technological competition.