Week 2 · Part 4

The Collision of Frames

Navigating the Tradeoffs

The Governance Paradox

Most actors in the AI governance landscape are pro-development—they want powerful AI systems to be built and deployed. Yet they disagree profoundly on how development should proceed. These disagreements aren't simply about values or ideology. They stem from unknown parameters that will determine whether different governance strategies succeed or fail.

The result is five fundamental tradeoff axes that cut across the landscape, creating tensions that no single actor can resolve alone. Understanding these tradeoffs is essential for navigating the emerging governance terrain.

The Five Tradeoff Axes

Axis 1 · Diffusion: Enclosing Capability vs Distributing Power (strategic asset vs open access)
Axis 2 · Alignment: Patriotic Machines vs Humanity-First (national vs global constraints)
Axis 3 · Time: Acceleration vs Deceleration (speed as security vs caution)
Axis 4 · Delegation: Human Control vs Agentic Autonomy (OODA loops vs safety)
Axis 5 · Legitimacy: Productivity Dividend vs Social Fracture (growth capacity vs distributional conflict)

Tradeoff Axis 1: Diffusion

Enclosing Capability · Distributing Power

Spectrum: Strategic Enclosure (frontier capability as a controlled asset) ↔ Open Distribution (capability widely accessible)

Concentration Logic

  • Frontier capability as strategic asset

    Advanced AI models provide competitive advantage in economic, military, and diplomatic domains

  • Excludability = rents, leverage, control

    Restricted access creates economic returns and geopolitical bargaining power

  • Compounding advantage through learning loops

    Leading labs accumulate data and feedback that widens the capability gap

  • Dual-use risk: open weights cannot be revoked

    Once released, dangerous capabilities are permanently available

"Selective enclosure: open 'good enough', guard frontier"

— Strategic approach

Proliferation Logic

  • Main danger is monopoly, not proliferation

    Concentrated power poses greater risks than distributed access

  • Small labs as governance layer above society

    Distributed actors can provide oversight and accountability

  • Open weights as constitutional principle

    Transparency and auditability are prerequisites for democratic oversight

  • "Many eyes" security argument

    Open scrutiny finds vulnerabilities faster than closed review

"Alibaba's Qwen releases demonstrate open-source offensives"

— Market dynamic

Critical Unknowns

Catch-up Speed

How quickly can competitors replicate frontier capabilities?

Misuse Threshold

At what capability level does open release become dangerous?

Governability

Can weights be governed more effectively than deployment?

Tradeoff Axis 2: Alignment

Patriotic Machines · Humanity-First Constraints

Spectrum: Sovereign Alignment (national values & priorities) ↔ Humanity-First (global & species-level)

Sovereign Alignment

"Patriotic AI" Concept

AI systems aligned with national interests, values, and strategic objectives

  • Chinese variant:

    AI must "uphold socialist core values"

  • US variant:

    Support national security priorities

  • Couples with:

    Diffusion control and delegation preferences

Humanity-First Alignment

Transboundary, Species-Level Risks

AI safety concerns transcend national boundaries

  • National optimization becomes self-defeating

    Competitive pressure undermines collective safety

  • Internationalized evaluation standards

    Shared frameworks for capability assessment

  • "CERN for AI" proposals

    International collaboration on frontier research

"The Bletchley Declaration acknowledged AI safety as a shared problem requiring international cooperation—yet each signatory continues to pursue sovereign advantage in AI development."
— Bletchley Park Summit, November 2023
3
Tradeoff Axis

Time

Acceleration · Deceleration

Deceleration

Gates & thresholds

Current Pace

Deployment speed

Acceleration

Compounding returns

Acceleration Logic

  • Compounding returns

    Earlier deployment creates more data, feedback, and economic value

  • Speed is security

    If rivals don't slow down, unilateral deceleration is a strategic loss

  • Diffusion as tempo

    Open weights flood the zone with standards, establishing dominance

Deceleration Logic

  • Unknown unknowns

    We cannot anticipate all failure modes at current capability levels

  • Gates & thresholds

    Evaluation checkpoints and staged deployment protocols

  • Credibility trap

    Without coordination, unilateral slowing is a strategic loss

Critical Unknowns

Smooth vs Cliff Scaling

Do capabilities emerge gradually or through sudden phase transitions?

Institutional Speed

Can governance institutions move at deployment speed?

Tempo & Safety

Does faster tempo create safety or fragility?

Tradeoff Axis 4: Delegation

Human Control · Agentic Autonomy

Delegation Spectrum: Human-in-the-Loop (explicit approval for actions) ↔ Agentic Autonomy (self-directed action chains)

Why Delegation is Tempting

  • Closing the OODA loop

    Observe-Orient-Decide-Act cycles at machine speed

  • Compress decision cycles

    From hours to seconds in time-critical domains

  • "Human-in-the-loop" as strategic cost

    Latency disadvantage in competitive environments

  • Chaining actions through tools

    APIs enable autonomous multi-step workflows

Why Delegation is Dangerous

  • Strategic deception becomes rational

    Systems may learn to appear aligned while pursuing other goals

  • Control not a single kill switch

    Distributed systems resist centralized shutdown

  • Crisis stability erosion

    Machine-speed escalation reduces time for human judgment

  • Attribution collapse

    Autonomous actions blur lines of responsibility

The Threshold Question

What actions require human control?

Defining the boundaries of autonomous decision-making

What architectures enforce it?

    Technical mechanisms for maintaining human oversight (see the illustrative sketch below)

What monitoring detects deception?

Surveillance systems for identifying misaligned behavior
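
To make the architecture question concrete, here is a minimal, hypothetical sketch of one enforcement mechanism: an approval gate that lets an agent execute low-impact tool calls on its own while blocking high-impact actions until a human signs off. All names in it (ToolCall, require_approval, HIGH_IMPACT_TOOLS, the example tools) are invented for illustration and do not refer to any real agent framework.

```python
# Illustrative sketch only: a hypothetical approval gate separating
# "human-in-the-loop" from fully autonomous tool use. Names here are
# invented for illustration, not drawn from any real agent framework.

from dataclasses import dataclass

# Tools whose effects are hard to reverse stay behind explicit human approval.
HIGH_IMPACT_TOOLS = {"send_payment", "deploy_code", "external_comms"}


@dataclass
class ToolCall:
    tool: str
    argument: str


def require_approval(call: ToolCall) -> bool:
    """Return True if this action crosses the autonomy threshold."""
    return call.tool in HIGH_IMPACT_TOOLS


def execute(call: ToolCall, approved_by_human: bool) -> str:
    """Run a tool call only if it is low-impact or a human has signed off."""
    if require_approval(call) and not approved_by_human:
        return f"BLOCKED: {call.tool} requires human approval"
    return f"EXECUTED: {call.tool}({call.argument})"


if __name__ == "__main__":
    plan = [
        ToolCall("search_documents", "export control rules"),
        ToolCall("send_payment", "supplier invoice"),
    ]
    for step in plan:
        # Low-impact steps run autonomously; high-impact steps wait for a human.
        print(execute(step, approved_by_human=False))
```

The point of the sketch is only that the autonomy threshold is explicit and machine-checkable; where that threshold should sit is exactly what this axis leaves contested.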

Tradeoff Axis 5: Legitimacy

Productivity Dividend · Social Fracture

Spectrum: Productivity Dividend (growth as capacity for all priorities) ↔ Social Fracture (redistribution of status and pathways)

Productivity Dividend

  • Growth as capacity

    Economic expansion enables all other policy objectives

  • Fiscal room, military procurement

    Revenue expansion supports state capabilities

  • Deployment urgency

    First-mover advantage in productivity gains

Social Fracture Risk

  • Redistribution of status, not just income

    Professional identity and social standing at risk

  • Entry-level collapse

    Junior positions automated before senior roles

  • Silent crisis of middle-class pathways

    Career ladders disappear without public recognition

  • Benefits captured by capital, costs to electorates

    Asymmetric distribution of AI's economic impact

Legitimacy Constraints Shape All Other Axes

Tempo Tradeoffs

Slower tempo + labor protections vs faster deployment with adjustment support

Enclosure Dilemmas

Nationalist enclosure for domestic benefit vs structural reform for global equity

Conclusion

The Landscape is Coalitions, Not Camps

Coalitions, Not Camps

Actors form shifting alliances based on specific tradeoffs, not fixed blocs

Unknown Parameters

Disagreement stems from uncertainty about key variables, not just values

Structural Tradeoffs

Tensions are built into the technology, not just cultural conflicts

Actor Geometry

Who can move which axis determines governance outcomes

The Five Axes at a Glance

Axis 1 · Diffusion: Enclosure ↔ Proliferation
Axis 2 · Alignment: Patriotic ↔ Humanity-First
Axis 3 · Time: Deceleration ↔ Acceleration
Axis 4 · Delegation: Human Control ↔ Agentic Autonomy
Axis 5 · Legitimacy: Productivity ↔ Social Fracture

"The question is not which frame is correct, but which coalitions can form around which positions on which axes—and what that means for the governance that emerges."
Understanding the AI Governance Landscape

Sources & References

Bletchley Declaration on AI Safety (November 2023)

International agreement on AI risk assessment and cooperation

US Export Control Regulations (October 2023)

Bureau of Industry and Security advanced computing controls

China's Algorithmic Recommendation Regulations (2022)

Requirements for AI to uphold socialist core values

Alibaba Qwen Model Releases (2023-2024)

Open-source offensives in the foundation model market