The New Ideological Map of AI Power
Competing Frames and Worldviews
Two Questions That Precede Policy
Each frame answers two prior questions: which unit must not lose, and which resource is truly scarce. Once those answers differ, the same model can produce opposite policy instincts.
1. What is the unit that must not lose?
- State Power — National sovereignty and strategic advantage
- Civilization/Regime Order — Cultural and institutional continuity
- The Species — Human survival and flourishing
- The Firm/Platform — Corporate competitive advantage
- The Social Contract — Democratic legitimacy and welfare
2. What is truly scarce?
- Compute lead — Realists' focus on hardware advantage
- State capacity — Restorationists' bureaucratic reform
- Time/speed — e/acc's urgency imperative
- Capital + scale — Corporate Utopians' resource concentration
- Control/verification — Safety advocates' alignment focus
- Legitimacy — Dislocation frame's social cohesion concern
Six Competing Ideological Frames
The Realist Frame
AI as National Power Amplifier
Strategic Logic
The realist frame starts from anarchy, relative gains, and distrust of capabilities that cannot be inspected. Frontier AI matters because it can raise the effectiveness of intelligence analysis, targeting, cyber operations, logistics, industrial planning, surveillance, deception, and command support at the same time.
1. Relative advantage can compound. Better models improve planning; better planning improves procurement and fielding; fielding creates more data and institutional learning. The fear is a lead that becomes self-reinforcing.
2. Integration matters more than invention. A model lead evaporates if it is not absorbed into defense systems, intelligence cycles, acquisition, and the industrial base.
3. Opacity produces worst-case planning. Capability lives in code, weights, tacit know-how, and distributed compute. If it cannot be counted or inspected, rivals are treated as possibly ahead.
Policy Instruments
The policy grammar is build, deny, integrate, and align allies. Markets remain useful, but the frame assumes they are insufficient when the object is national survival and the rival may not accept reciprocal restraint.
Deny
Use chips, semiconductor tools, cloud pathways, and talent channels as chokepoints against adversarial frontier capability.
Build
Expand domestic and allied compute, energy, semiconductor capacity, talent pipelines, and defense adoption.
Genesis Mission
A state-led mobilization model for sovereign frontier capability, closer to strategic infrastructure than normal procurement.
Integrate
Move AI into command systems, force structure, procurement, intelligence workflows, and military operations.
Align allies
Treat export controls, standards, interoperability, and trusted compute access as alliance politics by other means.
Internal Contradictions
Private chokepoints create a sovereignty paradox. The state wants sovereign control over strategic capability, yet the control surfaces sit with labs, cloud providers, chip suppliers, and engineering labor markets that it does not fully command.
Denial is temporary and catalytic. Export controls can buy time, but they also generate smuggling, substitution, efficiency innovation, and rival industrial policy.
Security and openness pull against each other. Realists need talent, capital, and open innovation ecosystems, while also demanding research security, trusted channels, and tighter controls.
Safety is achieved through superiority: war-winning capability becomes the condition of deterrence.
Reading synthesis after Elbridge Colby's realist line of argument.
Verification is the hard problem for AI arms-control analogies because model capability is harder to observe than deployed hardware.
Reading synthesis after Kissinger, Schmidt, and Huttenlocher, The Age of AI (2021).
Key Proponents
- Elbridge Colby (Defense strategist)
- Jake Sullivan (National Security Advisor)
- NSC AI Task Force
Tech-Right / Restorationist
From Libertarian "Exit" to Restorationist "Capture"
From Exit to Capture
The tech-right frame is a coalition rather than a single doctrine. What holds it together is a diagnosis of Western incapacity: slow institutions, procedural veto points, managerial risk-aversion, cultural demoralization, and a state that no longer executes with coherence.
This is why it differs from realism. Realists ask how to prevent a rival from overtaking the state; restorationists ask why the West cannot use the power it already has. AI becomes a lever for command capacity: fewer intermediaries, faster decisions, stricter enforcement, and a state rebuilt around technical operators.
- Exit: build outside the state through crypto, network alternatives, private governance fantasies, and digital secession.
- Capture: take over, harden, and retool the state while cutting through federal, state, local, and bureaucratic vetoes.
The Enemy is Internal Weakness
For restorationists, the primary threat is internal institutional decay: bureaucratic sclerosis, cultural demoralization, ideological capture, and talent misallocation away from defense, energy, industrial production, and hard power.
Legitimacy is cultural and coercive. Alignment is redefined as loyalty to a national-civilizational order rather than technical harmlessness or procedural fairness.
AI is administrative force multiplication. The technology is valued for monitoring, standardizing enforcement, exposing bureaucratic drift, optimizing procurement, and bypassing discretion points where execution stalls.
The state should be strong, but selectively rebuilt. Restorationism is comfortable with hierarchy and a techno-elite steering layer, yet distrusts the bureaucracies through which state power normally operates.
Policy Instruments
DOGE Modernization Agenda
Administrative modernization treated as a software, staffing, and execution problem rather than a deliberative-governance problem.
"Woke AI" Executive Orders
Procurement becomes an alignment instrument: vendors must certify systems against engineered social agendas and divisive-concepts language.
Public-Private Fusion in Defense
Agile software firms are pulled into defense and security work as an alternative to legacy institutional inertia.
Palantir's "Software of Sovereignty"
The Palantir argument is that Silicon Valley has an affirmative obligation to serve Western security and state capacity rather than retreat into consumer software or pacifism.
Reading synthesis after Karp and Zamiska, The Technological Republic.
Key Figures & Organizations
- Alex Karp and Nicholas Zamiska
- Palantir and defense-software firms
- DOGE modernization agenda
- EO 14179 and EO 14319 procurement doctrine
The Acceleration Frame (e/acc)
Speed Saves Lives
Worldview
Accelerationism treats AI as a generalized force multiplier on discovery: a machine for shortening the distance between question and answer, design and implementation, problem and solution. Politics is judged by whether it speeds or blocks that unlocking.
The frame is morally loaded because it counts non-arrival as harm. Cures not discovered, services not cheapened, and competence not diffused are treated as real losses rather than hypothetical benefits.
Delay carries an actualization cost: foregone cures, education, productivity, and civilizational capacity.
AI is read as a scientific accelerator, competence diffuser, abundance engine, and civilizational escalator.
The decelerator is treated as culpable when precaution blocks benefits that could otherwise materialize.
Acceleration has risks, but stagnation guarantees continued disease, poverty, ignorance, and scarcity.
Political Economy Vision
The accelerationist wager is built on opportunity cost. If transformative systems are delayed, then the lost cures, productivity gains, and scientific discoveries count as real harm. That is why regulation is cast not as prudence, but as a moral and civilizational drag on abundance.
The strongest version of the frame also has a diffusion doctrine. Concentrated capability is treated as more dangerous than proliferation because it produces an AI priesthood: a small state-corporate class able to throttle access, set norms, and govern speech. Open proliferation is framed as resilience.
Permissionlessness
Remove licensing and pre-approval barriers so experimentation can outrun institutional blockage.
Abundance Dissolves Conflict
If AI makes services, cognition, and expertise radically cheaper, scarcity politics can be recoded as engineering problems.
Case Study: SB 1047 Fight
California's frontier AI safety bill mattered because it turned "deceleration" into a concrete institutional object: liability, safety duties, and pre-deployment obligations for frontier developers. For e/acc, the bill became evidence that safety language could entrench gatekeepers and slow the frontier.
Stress Tests for e/acc
"We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology."
— Marc Andreessen, The Techno-Optimist Manifesto (2023)
Acceleration Logic
- Delay is treated as harm because foregone cures, learning, and abundance count as real losses.
- Risk is symmetrical: acceleration has risks, but stagnation guarantees continued suffering.
- Diffusion and open proliferation are preferred because concentrated capability creates an AI priesthood.
- Abundance is expected to dissolve distributional conflict by lowering the cost of services and competence.
Representative Text
Andreessen's The Techno-Optimist Manifesto crystallizes the abundance-through-technology argument.
Corporate Utopian Frame
AGI for Humanity
Three Core Arguments
Corporate utopianism is the dominant frame inside frontier labs and their capital ecosystems. It speaks in universal-benefit language: AGI for humanity, accelerated science, better medicine, education, productivity, and public services.
Where e/acc trusts diffusion, this frame trusts centralized stewardship. AGI-scale systems are treated as too capital-intensive, risky, and infrastructural for simple market diffusion, so a small number of mission-legitimated firms should build first, control tightly, and distribute benefits through managed channels.
Scale is Destiny
The fastest path to breakthroughs is scaling models, data, compute, talent, and energy. Only a few actors can mobilize that capital.
Containment Requires Control
Powerful systems should sit behind APIs, staged deployment, monitoring, and internal safety processes rather than open proliferation.
Governance is Engineering
Safety becomes evals, red-teaming, incident response, monitoring, and deployment controls managed like a high-risk industrial system.
The Infrastructure Trap
The economics of AI infrastructure create powerful centralizing forces. Training frontier models requires billions in compute, creating barriers that only the largest entities can cross. That produces the central tension of the frame: firms speak in universalist language about benefits for humanity, but the material structure of the stack keeps pushing capability, governance, and bargaining power into a very small number of actors.
A power and storage partnership that shows how frontier labs increasingly secure electrical capacity directly rather than treating energy as a background utility.
Managed Sovereignty
The newer corporate form is international. Frontier labs increasingly offer states a sovereignty bundle: local data residency, national startup support, training, compute access, and secure deployment, while the core weights, roadmap, and platform governance remain on corporate rails.
The result is a quasi-sovereign political economy. Labs negotiate land, power, water, security, and diplomatic acceptability; states gain capacity without full control; firms gain legitimacy and market access without surrendering the strategic core of the stack.
Corporate Utopian Fears
Uncontrolled Diffusion
Open weights weaken containment, enable misuse, and challenge the claim that responsible actors can steward the frontier safely.
State Overreach
Fragmented state rules can slow scaling and split markets, so firms prefer governance that formalizes their safety model without breaking their platform control.
"In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents."
— Sam Altman, The Intelligence Age (2024)
Key Organizations
- OpenAI
- Anthropic
- Google DeepMind
- Microsoft AI
Key Figures
- Sam Altman (OpenAI)
- Dario Amodei (Anthropic)
- Demis Hassabis (DeepMind)
- Satya Nadella (Microsoft)
Safety / Scientific Internationalist
Transboundary risk, safety science, and verification under uncertainty
Core Beliefs
The safety frame treats frontier AI as a transboundary risk problem: closer to biosecurity, nuclear safety, or aviation than to ordinary software externalities. Its central concern extends from misuse to loss of control, misalignment, and catastrophic cascades under deep uncertainty.
The key claim is that capability may outrun evaluation and control. That pushes the frame toward thresholds, testing, shared incident reporting, and pre-emptive restraint rather than waiting for post hoc correction after deployment.
Frontier AI as Species-Level Hazard
Frontier AI is treated as a potential species-level hazard through misuse, loss of control, misalignment, and catastrophic cascades.
Alignment Problem Not Solved
Capability may rise faster than controllability; racing before this is solved is treated as a civilizational gamble.
Worst Harms are Irreversible
Low-probability harms dominate the calculus when the downside is extinction-level or civilization-level and cannot be repaired.
Races Degrade Safety
Competitive pressure rewards deployment before actors fully understand failure modes, creating capability externalities.
Risks are Transboundary
If catastrophe is the failure mode, national competition is the wrong lens. The frame emphasizes monitoring, shared science, and non-optional constraints.
Political Defeat (2023-2025)
Safety politics briefly tried to occupy the commanding heights of frontier AI: lab governance, state legislation, and federal executive capacity. The pattern across these defeats is that capital control, employee incentives, and growth politics repeatedly overrode the case for binding slowdown.
OpenAI Boardroom Coup
An internal governance attempt to slow commercialization was reversed once employees and Microsoft asserted institutional power.
SB 1047 Vetoed
Catastrophic-risk obligations were acknowledged but rejected as the wrong instrument and too economically threatening.
AISI to CAISI Reorganization
The U.S. mandate shifted from restriction toward standards and innovation, moving safety from executive restraint into a thinner standards layer.
Institutional Architecture
The frame did not disappear after these defeats. It relocated into an epistemic and operational architecture: international reports, safety-institute networks, shared evaluation science, summit diplomacy, and lab-internal risk processes.
Scientific Consensus
International AI Safety Report 2026: evidence base for frontier-risk evaluation.
UN Scientific Panel
Independent International Scientific Panel on AI; the U.S. vote marks decoupling from the global safety consensus.
Safety Institutes Network
Bletchley, Seoul, and the San Francisco network turn safety into diplomatic and evaluation infrastructure.
Current Locus
After the political defeats of 2023-2025, the safety frame relocates into scientific, bureaucratic, international, and lab-internal architecture.
Institutional Layer
- Yoshua Bengio-led International AI Safety Report 2026 expert process
- UN Independent International Scientific Panel on AI
- International Network of AI Safety Institutes
- UK and Japan AI Safety Institutes
- U.S. CAISI after the AISI reorganization
Key Documents
- Bletchley Declaration on frontier AI risks
- International AI Safety Report 2026
- Seoul Statement of Intent toward International Cooperation on AI Safety Science
- NIST San Francisco Convening of the International Network of AI Safety Institutes
- NIST Center for AI Standards and Innovation (CAISI)
- California SB 1047 bill status and legal analysis
The Dislocation Frame
Labor, Legitimacy, and Social Order
Core Concerns
The dislocation frame is less organized than the others, but it may be politically decisive because it speaks to domestic stability. Its starting point is economic redundancy and social disposability rather than species extinction.
The core issue is political economy. AI can decouple growth from labor at scale, shifting bargaining power toward capital, compute, data, and distribution channels. The legitimacy problem emerges when productivity gains become a distributional shock rather than a broad dividend.
Economic Redundancy
The fear is growth without labor: productivity rises while the social role and bargaining power of workers erode.
Legitimacy as Constraint
A state cannot pursue external rivalry if it cannot maintain domestic consent, fiscal capacity, and social stability.
Factor Share Shifts
Returns move toward capital owners and scarce inputs: compute, proprietary data, model access, distribution, and deployment channels.
Status Order Disruption
The fragile point is entry-level cognitive work, the ladder through which societies reproduce middle-class status and professional identity.
Recent Warning Signals
The key empirical tension is augmentation versus substitution. Adoption data may show gradual integration, but social perception can still anticipate sudden displacement. Wage compression, slower hiring, tighter monitoring, and the disappearance of formative tasks are enough to generate backlash even before mass unemployment appears.
Amodei publicly warned that entry-level white-collar roles could be hit within a one-to-five-year horizon; the issue is a legitimacy shock as well as a labor-market adjustment.
Dario Amodei, Axios interview / The Adolescence of Technology.
Georgieva's Davos warning frames AI as a labor-market "tsunami," with particular risk for younger workers and entry-level pathways.
Kristalina Georgieva, IMF Managing Director, Davos 2026.
Policy Instincts
Taxation
Capture scarcity rents through windfall taxes, robot-tax proposals, profit sharing, or public claims on productivity gains.
UBI / UBS
Income guarantees and basic services are framed either as revolt prevention or as dignity-preserving transition policy.
Labor Innovation
Sectoral bargaining, data rights, algorithmic unions, and new institutions for work mediated by AI systems.
Human-in-the-Loop
Visible human accountability in sensitive domains protects legitimacy even when automation is technically feasible.
Reference Points
- Dario Amodei and frontier-lab warnings
- Kristalina Georgieva and IMF labor-market exposure
- TIME's The People vs. AI backlash frame
- Labor actors and anti-corporate conservatives
- Young workers and entry-level white-collar pathways
Political-Economy Mechanisms
Growth Without Labor
AI can turn productivity gains into a distributional shock when growth is decoupled from work.
Capital and Scarcity Rents
Returns shift toward capital, compute, data, and distribution channels unless institutions counteract them.
Credential Ladder Shock
The sensitive vector is the collapse of entry-level cognitive work as a route into middle-class status.
Comparative Framework Analysis
| Dimension | Realist | Restorationist | e/acc | Corporate | Safety | Dislocation |
|---|---|---|---|---|---|---|
| Unit to Protect | State | Civilization | Progress | Humanity | Species | Workers |
| Scarce Resource | Compute lead | State capacity | Time | Capital | Control | Legitimacy |
| Primary Threat | Rival states | Bureaucracy | Stagnation | Misuse | Misalignment | Inequality |
| Policy Stance | Compete | Reform | Accelerate | Scale | Regulate | Redistribute |
| International View | Alliances | Hegemony | Borderless | Markets | Cooperation | Varies |
Coalition Network Map
- National Security Coalition
- Tech Acceleration Coalition
- Governance Coalition
Key Tension Lines
Speed vs. Control
Realists want controlled advantage; e/acc wants maximum velocity
State vs. Market
Restorationists want state capacity; Corporates want private control
Progress vs. Safety
e/acc vs. Safety advocates on risk tolerance
Efficiency vs. Equity
Corporates optimize; Dislocation demands redistribution