Preface

Every architecture is a bet about what intelligence is. A transformer is not philosophically neutral; it is an empiricist machine — a Humean bundle of statistical associations. A world model is a neo-Kantian machine — it builds structure before experience. Active inference is Spinozist — one substance, one principle. Formal verification is Leibnizian — a characteristica universalis for the twenty-first century.

When labs disagree about AGI — Sutton vs. LeCun, Altman vs. Hassabis, Amodei vs. Marcus — they are re-enacting arguments from the seventeenth and eighteenth centuries under new names. This primer gives you the map. Once you have it, every paper, launch, and lab becomes legible as a philosophical commitment expressed in CUDA.

The durable axis is simple, and it recurs under three names: reductionism vs. holism, tabula rasa vs. innate structure, scale vs. structure. Everything else is commentary.

Philosophical lineage to frontier AI

1637 : Descartes : Method of doubt, dualism, rationalism
1677 : Spinoza : Monism, one substance
1714 : Leibniz : Characteristica universalis, symbolic calculus
1739 : Hume : Bundle theory, induction, empiricism
1781 : Kant : Synthesis, innate categories + empirical data
1807 : Hegel : Dialectic, thesis-antithesis-synthesis
1929 : Whitehead : Process philosophy
1936 : Turing : Computation, functionalism
1943 : McCulloch-Pitts : Neural net as logic
1948 : Wiener : Cybernetics, feedback
1962 : Kuhn : Paradigm shifts
1972 : Dreyfus : Phenomenological critique
1986 : Rumelhart-Hinton : Backprop, PDP
2010 : Friston : Free energy principle
2017 : Vaswani : Attention is all you need
2019 : Sutton : Bitter lesson
2022 : LeCun : JEPA, world models
2025 : Silver-Sutton : Era of experience
2025 : Gemini Deep Think : IMO gold, hybrid reasoning
2026 : Convergence : Scaling + world models + symbolic + alignment

How to Use This Primer

This survey is designed as a primer, not merely as an archive of claims. It can be read linearly from Part I to Part VIII, but it also supports selective reading: one path through the history of ideas, another through present technical schools and labs, and another through the concrete engineering pipeline where philosophical assumptions become system design.

Suggested routes

Historical path

Start with the classical philosophical vocabulary, move through twentieth-century reformulations, and end with the synthetic conclusion.

Frontier lab path

Use this route if your main interest is the present competitive landscape: schools, live disputes, labs, and the reasoning frontier.

Systems path

Follow this path if you want to connect philosophical commitments to training objectives, architecture, reasoning loops, and deployment constraints.


Recurring Axes

Reason and experience

Do systems need prior structure, or can broad competence emerge from enough data and optimization?

Reductionism and holism

Should intelligence be explained by decomposing parts, or by analyzing agent-world organization and feedback?

Scale and structure

When do more data and compute suffice, and when do architecture, search, memory, or verifiers become necessary?

Representation and embodiment

Is language-like representation enough, or do intelligence and robustness require persistent world models and physical coupling?


Survey Structure

Part I: Philosophical foundations (5 sections)

Modern AI debates recapitulate the rationalist–empiricist split, with Kant standing (unacknowledged) behind every hybrid architecture. The seventeenth- and eighteenth-century arguments are not historical curiosities — they are the load-bearing frames of contemporary ML.

Part II: 20th-century pivots (6 sections)

The twentieth century added five moves to the inherited frame: formalization (logic as the language of thought), computation (mind as machine), phenomenology (intelligence as embodied skill), systems (intelligence as feedback), and dialectics (intelligence as productive conflict). All five are load-bearing in 2026.

Part III: A MECE map of AI/ML schools in 2026 (7 sections)

Seven schools partition the 2026 landscape cleanly. Each has a distinct philosophical root, a core computational commitment, canonical methods, seminal papers, representative titans, and characteristic failure modes. Every lab, paper, and product can be located on this map, usually as a weighted combination of two or three schools.
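The idea of locating a lab as a weighted combination of two or three schools can be made concrete with a small sketch. The school names, the example lab, and its weights below are illustrative assumptions invented for this sketch, not data from the survey.

```python
from dataclasses import dataclass

# Hypothetical axes standing in for the seven schools of Part III.
# These labels are placeholders, not the survey's actual taxonomy.
SCHOOLS = (
    "connectionist", "symbolic", "embodied", "active_inference",
    "evolutionary", "cybernetic", "alignment",
)

@dataclass(frozen=True)
class LabProfile:
    """A lab as a point on the map: a weight over each school."""
    name: str
    weights: dict  # school -> raw emphasis (any non-negative scale)

    def normalized(self) -> dict:
        """Rescale raw weights into a mix that sums to 1."""
        total = sum(self.weights.get(s, 0.0) for s in SCHOOLS)
        return {s: self.weights.get(s, 0.0) / total for s in SCHOOLS}

    def dominant(self, k: int = 3) -> list:
        """The top-k schools: the 'two or three' that locate a lab."""
        mix = self.normalized()
        return sorted(SCHOOLS, key=mix.get, reverse=True)[:k]

# Invented example profile, not a claim about any real lab.
example = LabProfile("ExampleLab",
                     {"connectionist": 5, "alignment": 3, "symbolic": 2})
print(example.dominant())  # ['connectionist', 'alignment', 'symbolic']
```

The design choice here is that school membership is a mixture rather than a category, which matches the survey's claim that disagreements are about weights, not kinds.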

Part IV: The great debates (5 sections)

Five fault lines define the 2026 conversation. Each is a classical philosophical dispute re-enacted in ML terms, and each has become empirically testable in a way the classical versions were not.

Part V: Lab landscape 2026 (14 sections)

Every lab is a placed bet. You can predict most of what a lab will ship in the next twelve months from its philosophical posture — and the posture is readable from founder statements, flagship papers, and hardware choices.

Part VI: The training pipeline as philosophical injection point (4 sections)

Every school intervenes at a specific stage of the modern training pipeline. Reading a training recipe is reading a philosophical argument.

Part VII: Reasoning, consciousness, and the hardware substrate (5 sections)

Three hard problems remain: reasoning (solvable, partly solved), consciousness (open and urgent), and hardware (an energy and architecture crisis that may determine everything else).

Part VIII: The grand synthesis (5 sections)

No single school wins alone. The convergence pattern of 2025–2026 is a hybrid: scaling (connectionist base) + world models (embodied structure) + verifiable reasoning (symbolic scaffold) + alignment (dialectic constitution) + new substrates (organismic hardware). Every frontier lab is now converging toward some weighted combination. The durable disagreements are about weights, not kinds.


Supporting Materials