Thesis: The twentieth century added five moves to the inherited frame: formalization (logic as the language of thought), computation (mind as machine), phenomenology (intelligence as embodied skill), systems (intelligence as feedback), and dialectics (intelligence as productive conflict). All five are load-bearing in 2026.

2.1 Logical atomism and the linguistic turn

Bertrand Russell and the early Ludwig Wittgenstein (Tractatus Logico-Philosophicus, 1921) argued that reality has a logical structure mirrored by language. Complex propositions decompose into atomic ones; atomic propositions picture atomic facts. This is Leibniz plus Frege plus set theory — and it is the philosophical soil in which GOFAI (Good Old-Fashioned AI) germinated. Allen Newell and Herbert Simon's Physical Symbol System Hypothesis (1976) — "a physical symbol system has the necessary and sufficient means for general intelligent action" — is a direct descendant.
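The symbolic stance is easy to make concrete: atomic propositions are bare symbols, and inference is nothing but mechanical symbol manipulation. A toy forward-chaining sketch — not Newell and Simon's actual architecture, and all proposition names are illustrative:

```python
# Toy physical-symbol-system sketch: atomic propositions are symbols,
# rules are (premises, conclusion) pairs, and inference is repeated
# modus ponens until no new atoms can be derived.

def forward_chain(facts, rules):
    """Derive every atomic proposition reachable from facts via rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "mortals_die"}, "socrates_dies"),
]
print(forward_chain({"socrates_is_a_man", "mortals_die"}, rules))
```

The Tractarian shape is visible in miniature: every complex conclusion bottoms out in a finite set of atoms, and the derivation never appeals to anything but the symbols themselves.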

The later Wittgenstein (Philosophical Investigations, 1953) repudiated his own earlier view: meaning is not picture-to-fact but use in a form of life. Language games have no single essence — just family resemblance. This pivot seeded the skeptical tradition about symbolic AI.

2.2 Turing, functionalism, and the computational theory of mind

Alan Turing's "Computing Machinery and Intelligence" (Mind, 1950) replaced the question "can machines think?" with the imitation game — an operational, behavioral test. More importantly, his 1936 work on Turing machines established that computation is substrate-independent: any universal computer can, in principle, simulate any other.
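Substrate-independence falls out of the formalism itself: a Turing machine is just a transition table, and anything that follows the table — silicon, paper, vacuum tubes — computes the same function. A minimal interpreter, a sketch rather than Turing's 1936 notation:

```python
# Minimal Turing machine interpreter. The "machine" is pure data: a table
# mapping (state, symbol) -> (write, move, next_state). Any physical system
# that realizes these transitions computes the same function.

def run_tm(table, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example program: flip every bit, halt at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm(flip, "1011"))  # → 0100
```

Because `flip` is data, a universal machine can take it as input and simulate it — the observation behind Turing's universality result and every "same program, different substrate" argument since.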

This gave rise to functionalism: mental states are defined by their functional/computational role, not their physical realization. Hilary Putnam and Jerry Fodor are the canonical expositors; David Marr's three levels of analysis (computational, algorithmic, implementational) gave the program its methodology. The computational theory of mind (CTM) claims the mind is a computer — specifically, a symbol-manipulating engine. Every substrate-independence argument in modern AI consciousness debates (Butlin, Long, et al., 2023) sits on this foundation.

2.3 Phenomenology and the Dreyfus critique

Edmund Husserl, Martin Heidegger, and Maurice Merleau-Ponty built phenomenology — the systematic study of first-person experience. Heidegger's Being and Time (1927) introduced being-in-the-world: we are not detached minds representing a world but absorbed agents coping with it. Merleau-Ponty's Phenomenology of Perception (1945) grounded cognition in the lived body.

Hubert Dreyfus, building on this tradition, wrote What Computers Can't Do (1972) and What Computers Still Can't Do (1992) — the most influential philosophical critique of symbolic AI. His argument: human intelligence is not rule-following but skilled coping. Experts do not consult rules; they respond to situations. The frame problem, the context problem, the commonsense problem all reduce to this.

Dreyfus was dismissed in the 1970s and vindicated in the 2010s: connectionism and embodied AI are broadly Dreyfusian. Every argument for embodiment in robotics — Physical Intelligence's π-series, Nvidia GR00T, the whole embodied-AI case — carries phenomenological DNA. So does Fei-Fei Li's spatial intelligence thesis: cognition requires a situated, 3D, persistent world, not a 1D token stream.

2.4 Cybernetics, systems thinking, and complexity

Norbert Wiener's Cybernetics (1948) launched the study of feedback and control in animals and machines. The core idea — a system with a goal, a sensor, and a feedback loop is the minimal unit of purposive behavior — underwrites RL, control theory, and active inference. Wiener also raised the first alignment concerns: "We had better be quite sure that the purpose put into the machine is the purpose which we really desire."
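Wiener's minimal unit — goal, sensor, feedback loop — fits in a few lines. A proportional thermostat sketch; the gain and the heat-loss model are illustrative assumptions, not from Wiener:

```python
# Cybernetic minimal unit: a goal (setpoint), a sensor (error), and a
# feedback loop (heat proportional to error). Gains and the ambient-loss
# model are illustrative assumptions.

def simulate(setpoint=21.0, temp=15.0, gain=0.4, leak=0.1, steps=50):
    for _ in range(steps):
        error = setpoint - temp              # sensor: compare world to goal
        heat = max(0.0, gain * error)        # actuator: proportional response
        temp += heat - leak * (temp - 10.0)  # world: heating minus loss to a 10°C ambient
    return temp

print(round(simulate(), 2))  # settles near 18.8
```

Note that the loop settles below the setpoint: pure proportional feedback leaves a residual error (here, equilibrium where heating exactly balances leakage), which is why practical controllers add integral terms.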

Ludwig von Bertalanffy (General System Theory, 1968), Gregory Bateson (Steps to an Ecology of Mind, 1972), Donella Meadows (Thinking in Systems, 2008), Peter Senge (The Fifth Discipline, 1990), and Niklas Luhmann (autopoietic social systems) extended this into a general epistemology: systems have properties — emergence, non-linearity, feedback, self-reference — irreducible to their parts.

Complexity science (Santa Fe Institute — Kauffman, Holland, Mitchell, Wolfram) added emergence and complex adaptive systems: simple local rules produce globally structured behavior. This is the philosophical ground of scaling-law emergence claims ("capabilities emerge at scale"), of evolutionary methods (Sakana, novelty search), and of the whole "AI as ecosystem" framing.
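The "simple local rules, global structure" claim is easy to exhibit with an elementary cellular automaton: each cell's next state depends only on itself and its two neighbors, yet Rule 30 generates famously complex global patterns. A standard construction, not tied to any particular Santa Fe codebase:

```python
# Elementary cellular automaton: the rule number's binary expansion IS the
# local update table. Bit (l*4 + c*2 + r) of the rule gives the next state
# of a cell with left neighbor l, current state c, right neighbor r.

def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # single live cell
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Eight bits of local rule, run on a line of cells, produce a global triangle of aperiodic structure — the whole emergence argument in one screen of output.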

2.5 Process philosophy

Alfred North Whitehead's Process and Reality (1929) argued reality is not made of things but of events — "actual occasions" of experience that arise and perish. Substance is a stable pattern of process. This is the philosophical ancestor of dynamical-systems views of cognition, of Liquid AI's continuous-time neural networks, and of the organismic view at GenBio AI: a cell is not a bag of molecules but an ongoing process.

2.6 Dialectics and adversarial methods

Hegel's dialectic — schematized by his successors as thesis, antithesis, synthesis — and Marx's materialist adaptation gave us a model of productive conflict: truth emerges through opposition. Generative Adversarial Networks (Goodfellow et al., 2014) are the dialectic in pure form — generator and discriminator locked in an antagonism that produces synthesis. RLHF (reward model vs. policy), Constitutional AI (critic and reviser), debate-based alignment (Irving et al., 2018), and AlphaZero's self-play are all dialectical. "Adversarial robustness" is the Hegelian fingerprint on modern ML.
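The adversarial structure has an exact formal statement in Goodfellow et al.'s minimax objective — the discriminator D maximizes, the generator G minimizes, and equilibrium is the synthesis:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\!\big(1 - D(G(z))\big)\big]
```

At the optimal discriminator, $D^*(x) = p_{\text{data}}(x) / (p_{\text{data}}(x) + p_g(x))$, and the generator's objective reduces to the Jensen–Shannon divergence between $p_{\text{data}}$ and $p_g$: opposition literally driving the two distributions into agreement.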