Thesis: Seven schools structure the 2026 landscape. Each has a distinct philosophical root, a core computational commitment, canonical methods, seminal papers, representative titans, and characteristic failure modes. Every lab, paper, and product can be located on this map, usually as a weighted combination of two or three schools.

  mindmap
    root((AI schools 2026))
      Symbolic/Rationalist
        Leibniz
        GOFAI
        Formal methods
        Imandra, Symbolica
      Connectionist/Scaling
        Hume, Locke
        Deep learning
        LLMs
        OpenAI, xAI
      Embodied/World Models
        Merleau-Ponty, Heidegger
        JEPA, Cosmos
        Meta, World Labs, Nvidia, Pi
      Neuro-Symbolic/Hybrid
        Kant
        System1+System2
        AlphaProof, DeepMind, Anthropic
      Evolutionary/Open-Ended
        Darwin, Stanley-Lehman
        Novelty search, QD
        Sakana, Clune
      Organismic/Active Inference
        Spinoza, Aristotle
        Free energy principle
        Friston, VERSES, GenBio
      Causal/Probabilistic
        Laplace, Bayes
        Pearl ladder
        Tenenbaum, Pearl
  

3.1 Symbolic / rationalist

  • Root: Leibniz's characteristica universalis; logical atomism; Simon–Newell physical symbol system hypothesis.
  • Thesis: Intelligence is symbol manipulation. Meaning is compositional. Reasoning is deduction.
  • Methods: First-order logic, theorem proving, expert systems, formal verification, program synthesis.
  • Canonical works: Newell & Simon's Logic Theorist (1956) and General Problem Solver (1957–1959); McCarthy's LISP (1958); Cyc (Lenat, 1984–).
  • Titans: Judea Pearl (partly), Stuart Russell (partly), Gary Marcus (advocate).
  • 2026 instantiations: Imandra (ImandraX, CodeLogician); Symbolica (categorical deep learning); ExtensityAI (SymbolicAI); Lean-based proof systems used by AlphaProof.
  • Limitations: Brittle at scale; cannot learn from perception; frame/commonsense problem. Dreyfus's critique still bites.
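
The core commitment above, reasoning as deduction over explicit symbols, is compact enough to sketch as a toy forward-chaining engine. A minimal sketch; the rules and facts are illustrative, not drawn from any system named above.

```python
# Toy forward-chaining engine: facts are strings, rules are Horn clauses
# of the form (premises, conclusion). Purely illustrative.

def forward_chain(facts, rules):
    """Apply rules until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["human(socrates)"], "mortal(socrates)"),
    (["mortal(socrates)"], "has_end(socrates)"),
]
derived = forward_chain(["human(socrates)"], rules)
print("has_end(socrates)" in derived)  # True: a two-step deduction
```

The brittleness bullet is visible even at this scale: nothing generalizes from socrates to any other constant without a new rule, which is what unification and quantifiers buy in real theorem provers.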

3.2 Connectionist / statistical / scaling

  • Root: Humean empiricism — mind as bundle of associations; Lockean tabula rasa.
  • Thesis: Intelligence emerges from learning statistical regularities from massive data. Architectural simplicity + compute + data is sufficient.
  • Methods: Deep learning, transformers, next-token prediction, scaling laws (Kaplan 2020, Chinchilla 2022), RLHF.
  • Canonical works: Rumelhart, Hinton & Williams on backprop (1986); Krizhevsky, Sutskever & Hinton's AlexNet (2012); Vaswani et al., "Attention Is All You Need" (2017); Kaplan et al., "Scaling Laws for Neural Language Models" (2020); Hoffmann et al., "Chinchilla" (2022).
  • Titans: Geoffrey Hinton, Yann LeCun (historically), Yoshua Bengio, Richard Sutton (methodologically), Ilya Sutskever, Sam Altman, Dario Amodei, Jared Kaplan.
  • Seminal philosophical statement: Sutton's "The Bitter Lesson" (2019): "The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin."
  • 2026 instantiations: GPT-5, Claude Opus 4.5, Gemini 3, Grok 4, DeepSeek-V3.
  • Limitations: Humean induction problem; lives on Pearl's Rung 1; data-hungry; brittle out-of-distribution; interpretability gap.
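
The scaling-law claim has a concrete arithmetic core: a parametric loss L(N, D) = E + A/N^alpha + B/D^beta minimized under a compute budget C ≈ 6ND. A minimal sketch using the Approach-3 constants fitted by Hoffmann et al. (2022); treat the exact numbers as illustrative, not as valid for any 2026 model.

```python
# Chinchilla-style compute-optimal allocation (after Hoffmann et al., 2022).
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28  # paper's fitted values

def loss(N, D):
    """Predicted pretraining loss for N parameters and D tokens."""
    return E + A / N**alpha + B / D**beta

def optimal_allocation(C):
    """Minimize loss subject to C = 6*N*D (the standard FLOPs approximation)."""
    a = beta / (alpha + beta)                            # N* grows roughly as C^0.45
    G = (alpha * A / (beta * B)) ** (1 / (alpha + beta))
    N = G * (C / 6) ** a
    D = C / (6 * N)
    return N, D

N, D = optimal_allocation(5.76e23)  # roughly Chinchilla's training budget
print(f"N ≈ {N:.3g} params, D ≈ {D:.3g} tokens")
```

The closed form comes from setting the derivative of the constrained loss to zero; the practical point is that parameters and tokens should be scaled together, which is the correction Chinchilla made to Kaplan et al.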

3.3 Embodied / world models

  • Root: Merleau-Ponty and Heidegger on being-in-the-world; Gibson's ecological psychology.
  • Thesis: Intelligence is inseparable from a situated body in a physical world. 1D token streams cannot model 3D space, time, physics, and persistence.
  • Methods: Joint-embedding predictive architectures (JEPA), video world models, robot foundation models, spatial tokenizers.
  • Canonical works: LeCun, "A Path Towards Autonomous Machine Intelligence" (2022); V-JEPA 2 (2025, arXiv 2506.09985); Nvidia Cosmos (2501.03575); π-0 (Physical Intelligence, 2410.24164); Fei-Fei Li, "From Words to Worlds" (2025).
  • Titans: Yann LeCun, Fei-Fei Li, Sergey Levine, Chelsea Finn, Jensen Huang (industrial patron).
  • Seminal philosophical statement: LeCun, repeatedly through 2024–2025: "LLMs are an off-ramp on the road to human-level AI."
  • 2026 instantiations: Meta FAIR V-JEPA 2; World Labs (Marble, RTFM); Nvidia Cosmos + GR00T; Physical Intelligence π-0.5 and π*-0.6; DeepMind Genie 3.
  • Limitations: Currently far behind LLMs on text-heavy tasks; data scarcity for embodied experience; evaluation difficulty.
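
The JEPA idea, predicting in representation space rather than pixel space, can be caricatured in a few lines. The linear "encoders" and tiny dimensions below are toy assumptions, nothing like V-JEPA's actual architecture; the point is only where the loss lives.

```python
import random

# Minimal joint-embedding predictive sketch: predict the *embedding* of a
# target from the context embedding, and score in latent space. The raw
# input is never reconstructed.
random.seed(0)
d_in, d_emb = 8, 3

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

Wc = rand_matrix(d_emb, d_in)   # context encoder (toy: one linear map)
Wt = rand_matrix(d_emb, d_in)   # target encoder (in practice: an EMA copy)
Wp = rand_matrix(d_emb, d_emb)  # predictor, operating purely in latent space

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def jepa_loss(context, target):
    """Squared distance between predicted and actual target embeddings."""
    pred = matvec(Wp, matvec(Wc, context))
    s_tgt = matvec(Wt, target)
    return sum((p - t) ** 2 for p, t in zip(pred, s_tgt))

x = [random.gauss(0, 1) for _ in range(d_in)]
print(jepa_loss(x, x) >= 0.0)  # True: the objective lives in embedding space
```

The design choice this illustrates: because the loss compares embeddings, the model is free to discard unpredictable pixel-level detail, which is the stated advantage over generative reconstruction.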

3.4 Neuro-symbolic / hybrid

  • Root: Kant's synthesis — innate categories + empirical data. Dual-process cognition (Kahneman, Thinking, Fast and Slow, 2011).
  • Thesis: Neither pure scaling nor pure symbol manipulation suffices. System 1 (pattern completion, neural) and System 2 (deliberation, symbolic/search) must be combined.
  • Methods: LLM + formal verifier; LLM + search (MCTS, beam, best-of-N); modular architectures; categorical deep learning; constitutional AI and deliberative alignment.
  • Canonical works: Lake, Ullman, Tenenbaum & Gershman, "Building Machines That Learn and Think Like People" (BBS 2017); AlphaGeometry (Trinh et al., Nature, Jan 2024); AlphaProof/AlphaGeometry 2 (DeepMind, July 2024; Nature paper Nov 2025); Bai et al., "Constitutional AI" (Anthropic, 2022).
  • Titans: Demis Hassabis, Joshua Tenenbaum, Gary Marcus, Dario Amodei, George Morgan (Symbolica), Petar Veličković.
  • 2026 instantiations: AlphaProof, AlphaGeometry 2, Gemini Deep Think (IMO gold 2025), OpenAI o3/o4 deliberative alignment, Imandra + LLM, Symbolica categorical DL.
  • Limitations: Integration seams; formalization bottleneck; when it works it looks like magic, and when it doesn't it looks like a kludge.
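
In its simplest form, the System 1 + System 2 pattern reduces to propose-then-verify. In the sketch below the proposer is a deterministic stub standing in for an LLM (it emits off-by-one hallucinations alongside the right answer), and the verifier is exact arithmetic standing in for a Lean-style checker. Both functions are invented for illustration.

```python
# Best-of-N with a symbolic checker: a fallible System-1 proposer samples
# candidates, a System-2 verifier filters them.

def proposer(question, n=3):
    """Stub LLM: candidate answers to '<a>+<b>', including two wrong ones."""
    a, b = map(int, question.split("+"))
    return [a + b - 1, a + b + 1, a + b][:n]

def verifier(question, answer):
    """Symbolic check: re-derive the result exactly."""
    a, b = map(int, question.split("+"))
    return answer == a + b

def best_of_n(question, n=3):
    for cand in proposer(question, n):
        if verifier(question, cand):
            return cand
    return None  # abstain rather than guess

print(best_of_n("17+25"))  # → 42
```

The "integration seams" bullet shows up here as the formalization bottleneck: this only works for questions the verifier can state, which is exactly why AlphaProof-style systems lean on Lean.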

3.5 Evolutionary / open-ended

  • Root: Darwin; Stanley & Lehman, Why Greatness Cannot Be Planned (2015); complexity science.
  • Thesis: Objective-driven search is deceptive. True progress emerges from open-ended novelty search, quality-diversity, and population-based evolution. Intelligence is not optimized; it is accumulated.
  • Methods: Genetic algorithms, CMA-ES, novelty search, MAP-Elites quality-diversity (Mouret & Clune 2015), evolutionary model merging, AI-generating algorithms (Clune 2019).
  • Canonical works: Stanley & Lehman (2015); Mouret & Clune, "Illuminating Search Spaces by Mapping Elites" (2015); Akiba et al. "Evolutionary Optimization of Model Merging Recipes" (arXiv 2403.13187, 2024); Lu et al., "The AI Scientist" (2408.06292, 2024); DeepMind FunSearch (Nature, 2023) and AlphaEvolve (May 2025).
  • Titans: Kenneth Stanley, Joel Lehman, Jeff Clune, David Ha, Risto Miikkulainen.
  • 2026 instantiations: Sakana AI (evolutionary model merging, AI Scientist v2, Transformer², M2N2); DeepMind AlphaEvolve; quality-diversity ecosystems around Prime Intellect environments.
  • Limitations: Compute-hungry in a different way (populations of full evaluations rather than gradient steps); evaluation is hard without a clear fitness signal; still commercially niche.
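
MAP-Elites is compact enough to sketch whole: keep one elite per behavioral niche, mutate elites, and let coverage of niches, not a single objective, measure progress. The fitness function, descriptor, and all constants below are toy assumptions, not from Mouret & Clune's experiments.

```python
import random

# Toy MAP-Elites on a 1-D genome: the archive maps behavior niches to elites.
random.seed(1)

def fitness(x):
    return -abs(x)          # objective: prefer x near 0

def descriptor(x):
    """Behavior: which of 10 bins x falls into on [-5, 5]."""
    return min(9, max(0, int((x + 5) / 1.0)))

archive = {}  # cell -> (fitness, genome): one elite per niche

def insert(x):
    cell, f = descriptor(x), fitness(x)
    if cell not in archive or f > archive[cell][0]:
        archive[cell] = (f, x)

for x in [random.uniform(-5, 5) for _ in range(20)]:
    insert(x)                                  # random bootstrap
for _ in range(500):
    _, parent = random.choice(list(archive.values()))
    insert(parent + random.gauss(0, 0.5))      # mutate a random elite

print(len(archive))  # progress = number of distinct niches filled
```

Note the inversion of the usual loop: selection is over niches, so a mediocre but behaviorally novel individual survives where a pure objective would discard it. That is the "greatness cannot be planned" claim in twenty lines.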

3.6 Organismic / active inference / free energy

  • Root: Spinozist monism (one principle); Aristotelian hylomorphism; autopoiesis (Maturana & Varela); Helmholtz's "unconscious inference"; Friston's free energy principle.
  • Thesis: Intelligence — and perhaps life — is the minimization of variational free energy. Action, perception, and learning are three faces of one optimization. Agents are organisms, not function approximators.
  • Methods: Active inference, predictive coding, hierarchical Bayesian generative models, renormalizing generative models.
  • Canonical works: Friston, "The Free-Energy Principle: A Unified Brain Theory?" (Nat Rev Neurosci, 2010); Friston et al., Active Inference: The Free Energy Principle in Mind, Brain, and Behavior (MIT Press, 2022); Seth, Being You (2021); "From pixels to planning: scale-free active inference" (VERSES, 2024).
  • Titans: Karl Friston, Anil Seth, Andy Clark, Gabriel René.
  • 2026 instantiations: VERSES AI (Genius, AXIOM, RGMs); partially GenBio AI (organismic framing across biological scales); arguably any world-model-with-active-exploration architecture.
  • Limitations: Formal generality that critics say verges on unfalsifiability; engineering demonstrations lag theoretical claims; commercial scale-up unproven.
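
The free energy principle's central quantity has a small discrete-state core: F(q) = E_q[ln q(s) - ln p(o, s)], which equals surprise, -ln p(o), exactly when q is the true posterior, and exceeds it for any other q. A toy two-state sketch; the numbers are invented, not from Friston's papers.

```python
import math

# Discrete variational free energy for one observation o = "wet".
p_s = {"rain": 0.3, "sun": 0.7}          # prior over hidden states
p_o_given_s = {"rain": 0.9, "sun": 0.2}  # likelihood of observing "wet"

def free_energy(q):
    """F(q) = sum_s q(s) * (ln q(s) - ln p(o, s))."""
    return sum(q[s] * (math.log(q[s]) - math.log(p_s[s] * p_o_given_s[s]))
               for s in q if q[s] > 0)

p_o = sum(p_s[s] * p_o_given_s[s] for s in p_s)          # evidence
posterior = {s: p_s[s] * p_o_given_s[s] / p_o for s in p_s}
surprise = -math.log(p_o)

print(free_energy(posterior))  # equals surprise when q is the exact posterior
print(free_energy({"rain": 0.5, "sun": 0.5}) > surprise)  # True for any other q
```

The identity F(q) = KL(q || p(s|o)) + surprise is what licenses the school's slogan: minimizing F over beliefs is perception, and (in the full theory) minimizing it over actions is behavior.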

3.7 Causal / probabilistic

  • Root: Laplace, Bayes, Hume's problem of induction taken seriously; Pearl's structural causal models.
  • Thesis: Intelligence requires a ladder of inference — association, intervention, counterfactual. Without causal models, systems cannot generalize robustly, intervene correctly, or reason about what-if.
  • Methods: Bayesian networks, structural causal models, do-calculus, probabilistic programs, Bayesian program induction.
  • Canonical works: Pearl, Causality (2000); Pearl & Mackenzie, The Book of Why (2018); Pearl, "The Seven Tools of Causal Inference" (CACM, 2019); Lake et al. (2017); Tenenbaum et al., "How to grow a mind" (2011).
  • Titans: Judea Pearl, Josh Tenenbaum, Bernhard Schölkopf, Yoshua Bengio (partially, via System 2 and causal representation learning).
  • 2026 instantiations: Residual but crucial — causal representation learning, structured probabilistic programs, some aspects of GenBio and VERSES. Pearl himself has remained skeptical that LLMs genuinely ascend beyond Rung 1.
  • Limitations: Does not scale as cleanly as gradient descent; causal discovery from observational data is hard; partially absorbed into neuro-symbolic hybrids.

Summary table

| School | Root philosopher | Thesis | Seminal 2020s paper | Lead labs 2026 |
|---|---|---|---|---|
| Symbolic | Leibniz | Intelligence = symbol manipulation | AlphaProof/Lean methodology (2024) | Imandra, Symbolica, ExtensityAI |
| Connectionist/scaling | Hume, Locke | Scale + data = intelligence | Sutton, Bitter Lesson (2019) | OpenAI, xAI, DeepSeek |
| Embodied/world models | Merleau-Ponty | Cognition requires world | LeCun APTAMI (2022); V-JEPA 2 (2025) | Meta FAIR, World Labs, Pi, Nvidia |
| Neuro-symbolic | Kant | System 1 + System 2 | Lake et al. BBS (2017); AG2 (2024) | DeepMind, Anthropic, Symbolica |
| Evolutionary | Darwin, Stanley-Lehman | Open-ended novelty | Stanley & Lehman (2015); Sakana (2024) | Sakana, DeepMind (AlphaEvolve) |
| Active inference | Spinoza, Friston | One principle, minimize free energy | Friston (2010); VERSES RGMs (2024) | VERSES, partly GenBio |
| Causal/probabilistic | Laplace, Pearl | Ascend the causal ladder | Pearl CACM (2019) | MIT BCS, distributed |