Week 2

The New Ideological Map of AI Power

Competing Frames and Worldviews

Two Questions That Precede Policy

Before any AI policy can be formulated, two fundamental questions must be answered. These questions reveal the deep ideological divides shaping the global AI governance landscape.

1. What is the unit that must not lose?

  • State Power — National sovereignty and strategic advantage
  • Civilization/Regime Order — Cultural and institutional continuity
  • The Species — Human survival and flourishing
  • The Firm/Platform — Corporate competitive advantage
  • The Social Contract — Democratic legitimacy and welfare

2. What is truly scarce?

  • Compute lead — Realists' focus on hardware advantage
  • State capacity — Restorationists' bureaucratic reform
  • Time/speed — e/acc's urgency imperative
  • Capital + scale — Corporate Utopians' resource concentration
  • Control/verification — Safety advocates' alignment focus
  • Legitimacy — Dislocation frame's social cohesion concern

Six Competing Ideological Frames

  • Realist: National Power
  • Restorationist: State Capacity
  • e/acc: Acceleration
  • Corporate: Utopian Scale
  • Safety: Scientific Internationalism
  • Dislocation: Labor & Legitimacy

Frame 1

The Realist Frame

AI as National Power Amplifier

Core Assumptions

  1. Relative advantage matters. AI capabilities are not just about absolute progress but about maintaining a lead over adversaries.
  2. AI as general-purpose accelerant. AI enhances all domains of national power—economic, military, and technological.
  3. Verification failure → worst-case thinking. Since AI capabilities cannot be reliably verified, assume adversaries are ahead.

Policy Instruments

Compute Denial

Chokepoints on advanced semiconductors and cloud infrastructure

State-Anchored Scaling

War Economy model for AI development

Genesis Mission

Nov 2025 — Manhattan Project model for frontier AI

Replicator Initiative

Attritable autonomous systems at scale

Alliance Management

Coordinating compute access among trusted partners while denying adversaries

Internal Contradictions

Diffusion catches up. Technology spreads; compute denial is temporary.
Security dilemma. Measures meant to increase security provoke responses that decrease it.
Private chokepoints. Key capabilities reside in private firms, not under state control.

"War-winning capability over safe capability"
— Elbridge Colby, The Strategy of Denial

"AI arms control may be more difficult than nuclear arms control because verification is nearly impossible."
— Henry Kissinger et al., The Age of AI (2021)

Key Proponents

  • Elbridge Colby (Defense strategist)
  • Jake Sullivan (National Security Advisor)
  • NSC AI Task Force
Frame 2

Tech-Right / Restorationist

From Libertarian "Exit" to Restorationist "Capture"

The Ideological Shift

The tech-right has undergone a profound transformation: from Silicon Valley libertarianism advocating "exit" from government (building parallel systems) to a restorationist posture seeking to "capture" and reform state institutions from within.

OLD: Libertarian Exit

Build outside the state. Seasteading, crypto, private cities.

NEW: Restorationist Capture

Reform the state. Bureaucratic modernization, institutional takeover.

The Enemy is Internal Weakness

For restorationists, the primary threat is not external adversaries but internal institutional decay—bureaucratic sclerosis, regulatory capture, and the erosion of state capacity to execute ambitious projects.

Core belief: Legitimacy is not just procedural but cultural and coercive. A state that cannot execute has lost its right to rule.

Policy Instruments

DOGE Modernization Agenda

Department of Government Efficiency — AI-driven bureaucratic reform and cost reduction

"Woke AI" Executive Orders

EO 14179, EO 14319 — Removing DEI constraints from federal AI procurement

Public-Private Fusion in Defense

Deep integration of tech firms into national security apparatus

Palantir's "Software of Sovereignty"

AI platforms for state surveillance, immigration enforcement, and military operations

"We have an affirmative obligation to weaponize our technology in defense of the West."
— Alex Karp, CEO Palantir

Key Figures & Organizations

  • Alex Karp (Palantir CEO)
  • Marc Andreessen
  • DOGE (Department of Gov. Efficiency)
  • Heritage Foundation
  • Project 2025

Key Dates

  • 2024: DOGE announced
  • Jan 2025: EO 14179 signed
  • Apr 2025: EO 14319 expands scope
Frame 3

The Acceleration Frame (e/acc)

Speed Saves Lives

Core Tenets

Speed Saves Lives

Every day of delay costs lives that AI could have saved through medical, scientific, and material progress.

Progress as Solver

Technological progress is the only scalable solution to humanity's problems.

Delay is Harm

Opportunity cost of slowed progress exceeds speculative risks.

Symmetric Risk

Risk cuts both ways: the costs of stagnation are certain, while the risks of AI remain speculative.

Political Economy Vision

Permissionlessness

Remove regulatory barriers to innovation. Build first, regulate later if at all.

Abundance Dissolves Conflict

Post-scarcity from AI eliminates distributional conflicts that drive politics.

Case Study: SB 1047 Fight

California SB 1047

California's frontier AI safety bill would have required pre-deployment safety testing for large models. The e/acc coalition mobilized intensely against it.

Introduced: Feb 2024
Vetoed: Sept 2024

Stress Tests for e/acc

Tail risks: What if the worst-case scenarios materialize?
Infrastructure reality: Can energy and compute scale fast enough?
Legitimacy backlash: Will democratic publics accept this framing?

"We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology."
— Marc Andreessen, The Techno-Optimist Manifesto (2023)

Key Figures

  • Marc Andreessen (a16z)
  • Ben Horowitz (a16z)
  • Guillaume Verdon ("Beff Jezos")

Core Slogans

"Accelerate or die"
"Progress is inevitable"
"Regulation is stagnation"
Frame 4

Corporate Utopian Frame

AGI for Humanity

Three Core Arguments

1. Scale is Destiny

More compute, more data, more parameters = emergent capabilities. Scale wins, and scale is expensive.

2. Containment Requires Control

Only those who build AGI can ensure it's safe. Centralized development enables safety research.

3. Governance is Engineering

Social problems can be solved through technical solutions. Alignment is an engineering challenge.

The Infrastructure Trap

The economics of AI infrastructure create powerful centralizing forces. Training frontier models requires billions in compute, creating barriers that only the largest entities can cross.

Stargate Project ($500B)

Announced January 2025 — Joint venture between OpenAI, SoftBank, Oracle, and others to build AI infrastructure in the United States.

Data centers · Power generation · Semiconductor fabs

Corporate Utopian Fears

Uncontrolled Diffusion

Open-source models enable misuse by bad actors

State Overreach

Government regulation slows innovation and captures technology

"In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents."
— Sam Altman, The Intelligence Age (2024)

Key Organizations

  • OpenAI
  • Anthropic
  • Google DeepMind
  • Microsoft AI

Key Figures

  • Sam Altman (OpenAI)
  • Dario Amodei (Anthropic)
  • Demis Hassabis (DeepMind)
  • Satya Nadella (Microsoft)
Frame 5

Safety / Scientific Internationalist

The Only Anti-Accelerationist Frame

Core Beliefs

Frontier AI as Species-Level Hazard

Advanced AI poses existential risks comparable to nuclear weapons and pandemics.

Alignment Problem Not Solved

We do not currently know how to ensure AI systems pursue human-intended goals.

Worst Harms are Irreversible

Existential catastrophes cannot be undone. Precaution is warranted.

Races Degrade Safety

Competitive pressure reduces time for safety testing and increases corner-cutting.

Risks are Transboundary

AI safety requires international coordination. No single nation can solve this alone.

Political Defeat (2023-2025)

Nov 2023: OpenAI Boardroom Coup

Sam Altman briefly fired over safety concerns, then reinstated with a board overhaul

Sept 2024: SB 1047 Vetoed

California's frontier AI safety bill blocked by Governor Newsom

June 2025: AISI → CAISI Reorganization

US AI Safety Institute reorganized and deprioritized under the new administration

International Coordination Efforts

Bletchley Declaration

Nov 2023 — 28 nations agree on AI safety principles

Seoul Statement

May 2024 — Frontier AI safety commitments

International Network

Ongoing — AI Safety Institutes coordination

"The development of full artificial intelligence could spell the end of the human race."
— Stephen Hawking (2014)

Key Figures & Organizations

  • Stuart Russell (UC Berkeley)
  • Max Tegmark (FLI)
  • Future of Life Institute
  • Center for AI Safety
  • Anthropic (safety-focused)

Key Documents

  • "Existential Risk from Artificial General Intelligence" (Bostrom, 2012)
  • "Concrete Problems in AI Safety" (Amodei et al., 2016)
  • "Managing AI Risks in an Era of Rapid Progress" (Bengio et al., 2023)
Frame 6

The Dislocation Frame

Labor, Legitimacy, and Social Order

Core Concerns

Economic Redundancy

AI will make large segments of the workforce economically redundant, decoupling growth from labor.

Legitimacy as Constraint

The pace of AI deployment is constrained by social and political legitimacy, not just technical feasibility.

Factor Share Shifts

Automation shifts returns from labor to capital, increasing inequality unless redistributive policies intervene.

Status Order Disruption

AI undermines the social status attached to cognitive work, threatening social cohesion.

Expert Warnings

"AI will hit white-collar workers like a tsunami. Entry-level roles will disappear first."
— Dario Amodei, CEO Anthropic
"AI is about to hit the global labor market like a tsunami. We need urgent preparation."
— Kristalina Georgieva, IMF Managing Director (Jan 2026)

Policy Instincts

Taxation

Tax AI-derived productivity to fund redistribution

UBI / UBS

Universal Basic Income or Services

Labor Innovation

New job categories, reskilling programs

Human-in-the-Loop

Mandated human oversight requirements

Key Figures & Organizations

  • Daron Acemoglu (MIT)
  • Erik Brynjolfsson (Stanford)
  • IMF (labor market analysis)
  • ILO (future of work)
  • Labor unions (AFL-CIO, etc.)

Key Statistics

  • 300M full-time jobs at risk globally (Goldman Sachs)
  • 7% of US jobs at high risk of automation (McKinsey)
  • $4.4T in potential annual productivity gains (McKinsey)

Related Concepts

Technological unemployment · Polarization · Precariat · Post-work society · Digital divide

Comparative Framework Analysis

Dimension          | Realist      | Restorationist | e/acc      | Corporate | Safety       | Dislocation
Unit to Protect    | State        | Civilization   | Progress   | Humanity  | Species      | Workers
Scarce Resource    | Compute lead | State capacity | Time       | Capital   | Control      | Legitimacy
Primary Threat     | Rival states | Bureaucracy    | Stagnation | Misuse    | Misalignment | Inequality
Policy Stance      | Compete      | Reform         | Accelerate | Scale     | Regulate     | Redistribute
International View | Alliances    | Hegemony       | Borderless | Markets   | Cooperation  | Varies

Coalition Network Map

National Security Coalition

Realists + Restorationists
Focus: State power, defense
Shared: China competition

Tech Acceleration Coalition

e/acc + Corporate Utopians
Focus: Speed, scale, innovation
Shared: Anti-regulation

Governance Coalition

Safety + Dislocation
Focus: Risk, labor, equity
Shared: Precaution, legitimacy

Key Tension Lines

Speed vs. Control
Realists want controlled advantage; e/acc wants maximum velocity

State vs. Market
Restorationists want state capacity; Corporate Utopians want private control

Progress vs. Safety
e/acc vs. Safety advocates on risk tolerance

Efficiency vs. Equity
Corporate Utopians optimize for efficiency and scale; the Dislocation frame demands redistribution

Sources & References

Key Texts

  • Altman, S. (2024). "The Intelligence Age." Personal Blog.
  • Andreessen, M. (2023). "The Techno-Optimist Manifesto." a16z.
  • Colby, E. (2021). The Strategy of Denial. Yale University Press.
  • Kissinger, H., Schmidt, E., & Huttenlocher, D. (2021). The Age of AI. Little, Brown.

Official Documents

  • Executive Order 14179 (Jan 2025). "Removing Barriers to American Leadership in Artificial Intelligence."
  • Executive Order 14319 (Apr 2025). "Modernizing AI Procurement."
  • Bletchley Declaration (Nov 2023). International AI Safety Summit.
  • Seoul Statement (May 2024). Frontier AI Safety Commitments.

Research & Analysis

  • IMF (2026). "AI and the Future of Work." World Economic Outlook.
  • Goldman Sachs (2023). "The Potentially Large Effects of Artificial Intelligence on Economic Growth."
  • McKinsey Global Institute (2023). "The Economic Potential of Generative AI."

Media & Commentary

  • Karp, A. (2024). "The Software of Sovereignty." Palantir Blog.
  • Amodei, D. (2024). "Machines of Loving Grace." Anthropic Blog.
  • Various (2024). "SB 1047 Legislative History." California Legislature.