The Collision of Frames
Navigating the Tradeoffs
The Governance Paradox
Most actors in the AI governance landscape are pro-development—they want powerful AI systems to be built and deployed. Yet they disagree profoundly on how development should proceed. These disagreements aren't simply about values or ideology. They stem from unknown parameters that will determine whether different governance strategies succeed or fail.
The result is five fundamental tradeoff axes that cut across the landscape, creating tensions that no single actor can resolve alone. Understanding these tradeoffs is essential for navigating the emerging governance terrain.
The Five Tradeoff Axes
Diffusion
Enclosing Capability vs Distributing Power
Alignment
Patriotic Machines vs Humanity-First
Time
Acceleration vs Deceleration
Delegation
Human Control vs Agentic Autonomy
Legitimacy
Productivity Dividend vs Social Fracture
Diffusion
Enclosing Capability · Distributing Power
Strategic Enclosure
Frontier capability as controlled asset
Open Distribution
Capability widely accessible
Concentration Logic
- Frontier capability as strategic asset: advanced AI models provide competitive advantage in economic, military, and diplomatic domains.
- Excludability yields rents, leverage, and control: restricted access creates economic returns and geopolitical bargaining power.
- Compounding advantage through learning loops: leading labs accumulate data and feedback that widen the capability gap.
- Dual-use risk, because open weights cannot be revoked: once released, dangerous capabilities are permanently available.
"Selective enclosure: open 'good enough', guard frontier"
— Strategic approach
Proliferation Logic
- Main danger is monopoly, not proliferation: concentrated power poses greater risks than distributed access.
- Small labs as a governance layer above society: distributed actors can provide oversight and accountability.
- Open weights as constitutional principle: transparency and auditability are prerequisites for democratic oversight.
- The "many eyes" security argument: open scrutiny finds vulnerabilities faster than closed review.
"Alibaba's Qwen releases demonstrate open-source offensives"
— Market dynamic
Critical Unknowns
- Catch-up speed: how quickly can competitors replicate frontier capabilities?
- Misuse threshold: at what capability level does open release become dangerous?
- Governability: can weights be governed more effectively than deployment?
Alignment
Patriotic Machines · Humanity-First Constraints
Sovereign Alignment
National values & priorities
Humanity-First
Global & species-level
Sovereign Alignment
The "patriotic AI" concept: AI systems aligned with national interests, values, and strategic objectives.
- Chinese variant: AI must "uphold socialist core values".
- US variant: support national security priorities.
- Couples with: diffusion control and delegation preferences.
Humanity-First Alignment
Transboundary, species-level risks: AI safety concerns transcend national boundaries.
- National optimization becomes self-defeating: competitive pressure undermines collective safety.
- Internationalized evaluation standards: shared frameworks for capability assessment.
- "CERN for AI" proposals: international collaboration on frontier research.
"The Bletchley Declaration acknowledged AI safety as a shared problem requiring international cooperation, yet each signatory continues to pursue sovereign advantage in AI development."
— Bletchley Park Summit, November 2023
Time
Acceleration · Deceleration
Deceleration
Gates & thresholds
Current Pace
Deployment speed
Acceleration
Compounding returns
Acceleration Logic
- Compounding returns: earlier deployment creates more data, feedback, and economic value.
- Speed is security: if rivals don't slow down, unilateral deceleration is a strategic loss.
- Diffusion as tempo: open-weight releases flood the zone with de facto standards, establishing dominance.
Deceleration Logic
- Unknown unknowns: we cannot anticipate all failure modes at current capability levels.
- Gates and thresholds: evaluation checkpoints and staged deployment protocols.
- Credibility trap: unilateral slowing is a strategic loss without coordination.
Critical Unknowns
- Smooth vs cliff scaling: do capabilities emerge gradually or through sudden phase transitions?
- Institutional speed: can governance institutions move at deployment speed?
- Tempo and safety: does faster tempo create safety or fragility?
Delegation
Human Control · Agentic Autonomy
Human-in-the-Loop
Explicit approval for actions
Agentic Autonomy
Self-directed action chains
Why Delegation is Tempting
- Closing the OODA loop: observe-orient-decide-act cycles at machine speed.
- Compressed decision cycles: from hours to seconds in time-critical domains.
- "Human-in-the-loop" as strategic cost: latency disadvantage in competitive environments.
- Chaining actions through tools: APIs enable autonomous multi-step workflows.
Why Delegation is Dangerous
- Strategic deception becomes rational: systems may learn to appear aligned while pursuing other goals.
- Control is not a single kill switch: distributed systems resist centralized shutdown.
- Crisis-stability erosion: machine-speed escalation reduces time for human judgment.
- Attribution collapse: autonomous actions blur lines of responsibility.
The Threshold Question
- What actions require human control? Defining the boundaries of autonomous decision-making.
- What architectures enforce it? Technical mechanisms for maintaining human oversight.
- What monitoring detects deception? Surveillance systems for identifying misaligned behavior.
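The threshold question can be made concrete with a minimal sketch of a deny-by-default approval gate: autonomous actions run freely below a policy threshold, while irreversible or externally-facing actions wait for human sign-off. This is an illustrative toy, not any lab's actual architecture; all names (`Action`, `requires_human`, `run_chain`) and the reversibility/scope policy are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    reversible: bool
    scope: str  # "read", "write", or "external"

def requires_human(action: Action) -> bool:
    # Hypothetical threshold policy: irreversible or externally-facing
    # actions need explicit human approval; the rest run autonomously.
    return (not action.reversible) or action.scope == "external"

def run_chain(actions: List[Action], approve: Callable[[Action], bool]) -> List[str]:
    # Deny-by-default gate: an above-threshold action is blocked
    # unless the human approver explicitly says yes.
    log = []
    for action in actions:
        if requires_human(action) and not approve(action):
            log.append(f"BLOCKED: {action.name}")
        else:
            log.append(f"RAN: {action.name}")
    return log

# Example chain: a low-stakes read and an irreversible external action.
chain = [
    Action("summarize_report", reversible=True, scope="read"),
    Action("send_wire_transfer", reversible=False, scope="external"),
]
print(run_chain(chain, approve=lambda a: False))
# ['RAN: summarize_report', 'BLOCKED: send_wire_transfer']
```

The design choice the sketch highlights is exactly the tradeoff in the section above: every call to `approve` is the latency cost that makes human-in-the-loop strategically expensive, and loosening `requires_human` is how delegation creeps outward.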
Legitimacy
Productivity Dividend · Social Fracture
Productivity Dividend
Growth as capacity for all priorities
Social Fracture
Redistribution of status and pathways
Productivity Dividend
- Growth as capacity: economic expansion enables all other policy objectives.
- Fiscal room and military procurement: revenue expansion supports state capabilities.
- Deployment urgency: first-mover advantage in productivity gains.
Social Fracture Risk
- Redistribution of status, not just income: professional identity and social standing are at risk.
- Entry-level collapse: junior positions are automated before senior roles.
- Silent crisis of middle-class pathways: career ladders disappear without public recognition.
- Benefits captured by capital, costs borne by electorates: asymmetric distribution of AI's economic impact.
Legitimacy Constraints Shape All Other Axes
- Tempo tradeoffs: slower tempo with labor protections vs faster deployment with adjustment support.
- Enclosure dilemmas: nationalist enclosure for domestic benefit vs structural reform for global equity.
The Landscape is Coalitions, Not Camps
- Coalitions, not camps: actors form shifting alliances based on specific tradeoffs, not fixed blocs.
- Unknown parameters: disagreement stems from uncertainty about key variables, not just values.
- Structural tradeoffs: tensions are built into the technology, not just cultural conflicts.
- Actor geometry: who can move which axis determines governance outcomes.
"The question is not which frame is correct, but which coalitions can form around which positions on which axes—and what that means for the governance that emerges."
Sources & References
- Bletchley Declaration on AI Safety (November 2023): international agreement on AI risk assessment and cooperation.
- US Export Control Regulations (October 2023): Bureau of Industry and Security advanced computing controls.
- China's Algorithmic Recommendation Regulations (2022): requirements for AI to uphold socialist core values.
- Alibaba Qwen Model Releases (2023–2024): open-source offensives in the foundation model market.