Thesis: Every lab is a placed bet. You can predict most of what a lab will ship in the next twelve months from its philosophical posture — and the posture is readable from founder statements, flagship papers, and hardware choices.

```mermaid
graph LR
    A[Scaling bet] --> OpenAI
    A --> xAI
    B[Scaling + safety] --> Anthropic
    C[Scaling + scaffold] --> DeepMind
    D[World models] --> Meta_FAIR[Meta FAIR]
    D --> WorldLabs[World Labs]
    D --> Nvidia
    D --> PhysicalIntelligence[Physical Intelligence]
    E[Neuro-symbolic] --> Symbolica
    E --> Imandra
    E --> ExtensityAI
    F[Evolutionary] --> Sakana
    G[Active inference] --> VERSES
    H[Organismic biology] --> GenBio[GenBio AI]
    I[Alignment-first] --> SSI[Safe Superintelligence]
    J[Alternative substrate] --> Liquid[Liquid AI]
    J --> Extropic
    J --> Rain[Rain AI]
```
  

5.1 OpenAI — pure scaling, now with reasoning

  • Key figure: Sam Altman (CEO); Greg Brockman; historically Ilya Sutskever.
  • Philosophical stance: Scaling hypothesis maximalist with reasoning overlay. Scaling laws "still hold"; compute is destiny.
  • Flagship 2025–2026: GPT-5 (August 7, 2025) — unified system with a real-time router between fast and thinking modes; GPT-5 Pro/mini/nano variants; o-series (o1 September 2024; o3 December 2024; o4 2025); Stargate infrastructure partnership (OpenAI/SoftBank, 10-GW-class data centers); $10B Cerebras compute deal (2026–2028). SOTA on AIME 2025 (94.6%), SWE-bench Verified (74.9%).
  • Signature quote: Altman, "You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future."
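
The scaling posture is quantitative at heart: loss falls as a power law in training compute, so every doubling of compute buys a fixed multiplicative shrink of the reducible loss. A minimal sketch of that arithmetic — the coefficients here are illustrative stand-ins, not fitted values from any lab's data:

```python
# Sketch of scaling-hypothesis arithmetic. The coefficients a, b and the
# irreducible term are hypothetical illustrations, not real fitted values.
def predicted_loss(compute_flops, a=8.0, b=0.05, irreducible=1.7):
    """Power-law loss curve L(C) = a * C^(-b) + L_inf."""
    return a * compute_flops ** (-b) + irreducible

# Doubling compute shrinks the reducible term by a constant factor:
# (2C)^(-b) / C^(-b) = 2^(-b), independent of where you start.
gain_per_doubling = 2 ** (-0.05)

for c in (1e22, 1e24, 1e26):  # roughly GPT-4-era through "trillions" era
    print(f"C = {c:.0e} FLOPs -> predicted loss {predicted_loss(c):.3f}")
```

The irreducible term is the crux of the debate: scaling maximalists bet it is low enough not to matter; the world-model and neuro-symbolic camps below bet the curve flattens before capability arrives.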

5.2 xAI — compute maximalism

  • Key figure: Elon Musk; Igor Babuschkin (historically).
  • Philosophical stance: Pure scaling plus "truth-seeking" framing. Vertically integrated — Tesla Megapacks, SpaceX-velocity hardware, X/Twitter data.
  • Flagship: Grok 3 (early 2025); Grok 4 (~200M H100-hours, 15× Grok 2); Grok 5 training (late 2025). Colossus (Memphis, 100K→200K H100s, built in 122 days); Colossus 2 (Southaven, MS, January 2026) — "first gigawatt training cluster in the world"; roadmap to 1M+ GPUs.
  • Philosophical fingerprint: Belief that scale alone, applied aggressively enough, produces AGI — reductionism at industrial scale.

5.3 Nvidia — "Physical AI"

  • Key figure: Jensen Huang; Bill Dally; Sanja Fidler; Jim Fan.
  • Philosophical stance: Embodied world models as the next scaling frontier. LLMs solved language; now solve space.
  • Flagship: Cosmos (CES January 2025, arXiv 2501.03575) — open world foundation models for robots and AVs; expanded at GTC March 2025 with a reasoning WFM. Project GR00T (humanoid robot foundation model); GR00T Blueprint for synthetic data via Omniverse + Cosmos Transfer. Customers: 1X, Figure, Agility, XPENG, Waabi.
  • Signature quote: Huang, "Just as LLMs revolutionized generative and agentic AI, Cosmos world foundation models are a breakthrough for physical AI."

5.4 Meta FAIR — JEPA and the LeCun agenda

  • Key figure: Yann LeCun (until his November 2025 departure); Rob Fergus has led FAIR since Joelle Pineau's own 2025 exit.
  • Philosophical stance: Non-generative world models via joint-embedding prediction. LLMs are an off-ramp.
  • Flagship: V-JEPA 2 (June 11, 2025, arXiv 2506.09985) — 1.2B-parameter video world model trained on 1M+ hours; zero-shot robot planning. Preceded by I-JEPA (2023) and V-JEPA (2024).
  • 2025 rupture: LeCun announced in November 2025 that he is leaving to found a JEPA-focused company, reportedly with a licensing arrangement giving Meta access to its work — one of the highest-profile philosophical secessions in AI history.
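
The joint-embedding idea fits in a few lines. A minimal numpy sketch, with the assumed simplification that linear maps stand in for the real ViT encoders and predictor, and nothing is trained:

```python
import numpy as np

# JEPA in miniature: predict the *embedding* of a masked target from the
# embedding of its context. The generative reconstruction loss in pixel
# space never appears -- that is the whole philosophical point.
rng = np.random.default_rng(0)

D_in, D_emb = 64, 16
context_encoder = rng.standard_normal((D_in, D_emb)) * 0.1
target_encoder = context_encoder.copy()  # an EMA twin in I-JEPA/V-JEPA
predictor = np.eye(D_emb)                # a learned network in the real thing

def jepa_loss(context_patch, target_patch):
    """L2 distance between predicted and actual target embeddings."""
    z_ctx = context_patch @ context_encoder
    z_tgt = target_patch @ target_encoder
    return float(np.mean((z_ctx @ predictor - z_tgt) ** 2))

x = rng.standard_normal(D_in)
print(jepa_loss(x, x))                           # identical patches: zero
print(jepa_loss(x, rng.standard_normal(D_in)))   # unrelated patch: positive
```

Because the loss lives in embedding space, the model is free to discard unpredictable pixel detail — LeCun's stated objection to autoregressive generation.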

5.5 World Labs — spatial intelligence

  • Key figure: Fei-Fei Li; Justin Johnson; Ben Mildenhall (NeRF); Christoph Lassner.
  • Philosophical stance: Spatial intelligence — 3D/4D world models are a distinct problem from language modeling.
  • Flagship: Launched September 2024 with $230M in funding; Marble (November 12, 2025), its first commercial product — persistent, editable 3D world generation from text/image/video/panorama inputs; Chisel, a hybrid 3D editor separating structure from style; RTFM (Real-Time Frame Model) with spatial memory.
  • Signature text: Li, "From Words to Worlds" (November 17, 2025): "The dimensionality of representing a world is vastly more complex than that of a one-dimensional, sequential signal like language."

5.6 Physical Intelligence — embodied foundation models

  • Key figure: Sergey Levine; Chelsea Finn; Karol Hausman.
  • Philosophical stance: Generalist robot foundation models trained on heterogeneous physical experience — Merleau-Ponty plus VLMs.
  • Flagship: π-0 (October 2024, arXiv 2410.24164) — a vision-language-action (VLA) model with a flow-matching action head atop PaliGemma; 10,000+ hours of data across 7 robot platforms. π-0.5 (April 2025, arXiv 2504.16054) — open-world generalization to unfamiliar kitchens and bedrooms. π*0.6 (November 2025, arXiv 2511.14759) — "a VLA that Learns from Experience," RL on generalist policies. Open-source via openpi.
  • Signature claim: "To build AI systems with the kind of physically situated versatility that people possess, we need to make AI systems embodied."
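
The flow-matching objective behind π-0's action generation can be sketched briefly. An assumed simplification: Gaussian noise is interpolated toward an expert action chunk and the model regresses the constant velocity of that straight path. The `oracle` below cheats by seeing the noise sample; a real VLA must infer the field from (x_t, t) and its visual/language context alone:

```python
import numpy as np

# Toy flow-matching loss. Not Physical Intelligence's code -- a minimal
# stand-in for the objective their papers describe.
rng = np.random.default_rng(0)
action_dim = 7  # e.g. a 7-DoF arm; illustrative only

def flow_matching_loss(v_theta, expert_action, t):
    noise = rng.standard_normal(action_dim)     # x_0 ~ N(0, I)
    x_t = (1 - t) * noise + t * expert_action   # point on the straight path
    target_v = expert_action - noise            # d x_t / dt along that path
    return float(np.mean((v_theta(x_t, t, noise) - target_v) ** 2))

expert = rng.standard_normal(action_dim)
oracle = lambda x_t, t, noise: expert - noise   # a perfect velocity field
print(flow_matching_loss(oracle, expert, t=0.3))
```

At inference time the field is Euler-integrated from fresh noise to produce a smooth, continuous action chunk — the reason flow matching suits robot control better than token-by-token decoding.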

5.7 Google DeepMind — scaffolded hybrids

  • Key figure: Demis Hassabis; Shane Legg; Koray Kavukcuoglu.
  • Philosophical stance: Kantian synthesis operationalized — scale plus search plus specialized verifiers plus world models.
  • Flagship: Gemini 3 Pro (November 2025); Gemini Deep Think (IMO gold July 2025, ICPC gold 2025); AlphaProof + AlphaGeometry 2 (IMO silver July 2024, Nature paper November 2025); AlphaEvolve (May 2025) — a Gemini-powered evolutionary coding agent that discovered a 4×4 complex-matrix multiplication algorithm using 48 scalar multiplications, beating the 49 of Strassen's 1969 construction, and sped up Gemini's own training; Genie 3 (August 2025) — real-time 720p/24fps interactive world model; AlphaFold 3 (Nature, 2024).
  • Philosophical fingerprint: The broadest hedge in the industry — scaling, reasoning, symbolic, evolutionary, world models, all co-developed.
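
The evolve-and-verify loop underneath AlphaEvolve-style systems can be caricatured in a few lines. This is a toy hill-climber over integer tuples with a made-up scoring function — not the real pipeline, which mutates code with Gemini and scores candidates with automated evaluators — but the propose/score/select skeleton is the same:

```python
import random

# Toy evolve-and-verify loop. `evaluate` is a stand-in verifier
# (distance from a hidden optimum), not a real program evaluator.
random.seed(0)

def evaluate(candidate):
    target = (4, 8, 15)
    return -sum((a - b) ** 2 for a, b in zip(candidate, target))

def mutate(candidate):
    c = list(candidate)
    c[random.randrange(len(c))] += random.choice((-1, 1))
    return tuple(c)

population = [(0, 0, 0)]
for _ in range(2000):
    parent = max(population, key=evaluate)   # pick the current champion
    population.append(mutate(parent))        # propose a variant
    population = sorted(population, key=evaluate)[-8:]  # keep the elite

best = max(population, key=evaluate)
print(best, evaluate(best))
```

The crucial ingredient is a cheap, trustworthy evaluator: evolution only works where the verifier, not the proposer, is the source of truth — which is why the method landed first on math and code.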

5.8 Anthropic — safety-first scaling plus interpretability

  • Key figure: Dario and Daniela Amodei; Chris Olah (interpretability); Jan Leike (alignment).
  • Philosophical stance: Scaling works but is dangerous; pair capability with Constitutional AI and mechanistic interpretability.
  • Flagship: Claude 4 Opus & Sonnet (May 2025) with Model Context Protocol and Claude Code GA; Claude 4.5 family (late 2025, Opus 4.5 hit 80.9% on SWE-bench Verified). Constitutional AI (Bai et al., arXiv 2212.08073, December 2022) expanded through 2025–2026; interpretability (Toy Models of Superposition 2022; Towards Monosemanticity October 2023; Scaling Monosemanticity May 2024 with Golden Gate Claude; Circuit Tracing/"On the Biology of an LLM" March 2025; "Emergent Introspective Awareness" October 2025). Model welfare program launched April 2025 (Kyle Fish).
  • Signature text: Dario Amodei, "Machines of Loving Grace" (October 2024) — powerful AI by 2026–2027 could compress a century of biomedical progress into 5–10 years; follow-up "The Adolescence of Technology" (January 2026) on alignment-faking and scheming in Claude 4 Opus.
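
The sparse-autoencoder recipe behind the monosemanticity line of work fits in a short sketch. Assumed simplifications: random tied weights instead of trained ones, and the L1 sparsity penalty used in training is omitted:

```python
import numpy as np

# SAE in miniature: decompose a dense residual-stream activation into an
# overcomplete, nonnegative feature basis. Weights here are random; the
# real recipe trains them with an L1-penalised reconstruction objective
# so that most features are exactly zero on any given input.
rng = np.random.default_rng(0)

d_model, n_features = 8, 32            # overcomplete dictionary (4x)
W_enc = rng.standard_normal((d_model, n_features))
W_dec = W_enc.T.copy()                 # tied weights, a common choice
b_enc = np.zeros(n_features)

def sae(activation):
    # ReLU keeps features nonnegative; the (omitted) L1 penalty is what
    # drives most of them to zero in a trained SAE.
    features = np.maximum(activation @ W_enc + b_enc, 0.0)
    reconstruction = features @ W_dec
    return features, reconstruction

f, x_hat = sae(rng.standard_normal(d_model))
print("active features:", int((f > 0).sum()), "of", n_features)
```

The bet is that these learned features, unlike raw neurons, correspond to single human-legible concepts — superposition undone by overcompleteness.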

5.9 Sakana AI — evolutionary model merging

  • Key figure: David Ha; Llion Jones (Attention Is All You Need co-author); Takuya Akiba; Robert Lange.
  • Philosophical stance: Nature-inspired, collective intelligence, quality-diversity over monolithic scale.
  • Flagship: Evolutionary Model Merging (Akiba et al., arXiv 2403.13187, March 2024; Nat Mach Intell 2024) producing EvoLLM-JP/EvoVLM-JP. The AI Scientist (Lu et al., arXiv 2408.06292, August 2024) — end-to-end automated research at ~$15/paper. AI Scientist-v2 (arXiv 2504.08066, April 2025) — tree-search; produced the first AI-generated paper to pass ICLR 2025 workshop peer review. Transformer² (arXiv 2501.06252, January 2025) — self-adaptive weights via singular-value fine-tuning. M2N2 (arXiv 2508.16204, 2025) — quality-diversity niche-based merging.
  • Signature claim: "Our new system paves the way for a new generation of adaptive AI models… embodying living intelligence capable of continuous change and lifelong learning."
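
Evolutionary model merging reduces to searching over mixing recipes for parent weights. A toy sketch in the spirit of the EvoLLM work — one scalar interpolation weight and a mutate-and-select loop, where the real system evolves per-layer recipes with CMA-ES over full LLM checkpoints:

```python
import numpy as np

# Toy merge search: two 2-parameter "models", each good at one task
# dimension; evolution finds the interpolation that does both.
rng = np.random.default_rng(0)

parent_a = {"w": np.array([1.0, 0.0])}   # strong on capability 0
parent_b = {"w": np.array([0.0, 1.0])}   # strong on capability 1

def merge(alpha):
    return {k: alpha * parent_a[k] + (1 - alpha) * parent_b[k] for k in parent_a}

def fitness(model):
    # Held-out task needs both capabilities: score the weaker one.
    return float(min(model["w"][0], model["w"][1]))

best_alpha, best_fit = 0.0, fitness(merge(0.0))
for _ in range(200):                      # mutate-and-select loop
    alpha = float(np.clip(best_alpha + rng.normal(0, 0.1), 0, 1))
    fit = fitness(merge(alpha))
    if fit > best_fit:
        best_alpha, best_fit = alpha, fit

print(f"evolved alpha = {best_alpha:.2f}, fitness = {best_fit:.2f}")
```

Neither parent scores above zero alone; the evolved merge does — the quality-diversity argument in two dimensions.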

5.10 VERSES AI — active inference

  • Key figure: Karl Friston (Chief Scientist); Gabriel René (CEO).
  • Philosophical stance: Free energy principle as the unifying principle of intelligence; active inference as the practical method.
  • Flagship: Genius (commercial launch April 30, 2025) — Bayesian/active-inference agentic toolkit; AXIOM (June 2025), reported to outperform strong deep-RL baselines on Atari-style game benchmarks; Renormalizing Generative Models (RGMs) — "From pixels to planning: scale-free active inference" (2024); August 2024 open letter to OpenAI proposing collaboration; Gartner 2025 Emerging Tech Impact Radar for Spatial AI; IWAI 2025 sponsorship.
  • Signature claim (René): RGMs are "a fundamental shift in how we think about building intelligent systems from first principles… the 'one method to rule them all.'"
  • Caveats: Benchmark claims are largely self-reported pending independent replication.
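
The quantity active inference minimises is concrete enough for a worked example. Variational free energy is F = E_q[log q(s) − log p(o, s)]; it upper-bounds surprise (−log p(o)) and is minimised exactly by the true posterior. A two-state illustration with made-up numbers, not drawn from any VERSES system:

```python
import numpy as np

# Worked free-energy example: two hidden states, one binary observation.
prior = np.array([0.5, 0.5])        # p(s)
likelihood = np.array([0.9, 0.2])   # p(o=1 | s)

def free_energy(q):
    """F = E_q[log q(s) - log p(o=1, s)]."""
    joint = likelihood * prior      # p(o=1, s)
    return float(np.sum(q * (np.log(q) - np.log(joint))))

q_bad = np.array([0.5, 0.5])        # ignores the observation
# The exact posterior p(s | o=1) minimises F, where F = -log p(o=1).
q_exact = likelihood * prior / np.sum(likelihood * prior)

print(free_energy(q_bad))           # higher: a worse explanation
print(free_energy(q_exact))         # equals -log(0.55), the surprise
```

Perception, on this view, is just driving F down by revising q; action drives it down by changing o — the same scalar governs both.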

5.11 Symbolica, Imandra, ExtensityAI — neuro-symbolic

  • Symbolica (George Morgan, out of stealth April 2024, $31M Khosla-led Series A; advisor Stephen Wolfram). Categorical deep learning: co-authored position paper with DeepMind (Veličković et al.) generalizing geometric deep learning via endofunctor algebras. Limited product output through 2025; research programme around non-autoregressive structured reasoning.
  • Imandra (Passmore, Ignatovich, Austin). ImandraX (February 2025) — a new reasoning engine with the first formally verified proof checker for neural-network safety properties. CodeLogician (March 2025) — a LangGraph agent converting source code into formally verified models in IML. Imandra Universe (June 2025) — MCP access to symbolic reasoning from Claude/ChatGPT/Cursor. Benchmarks report closing a 41–47-percentage-point accuracy gap versus LLM-only baselines on code tasks.
  • ExtensityAI (Marius-Constantin Dinu, Austria). SymbolicAI framework (arXiv 2402.00854, February 2024; formally published at CoLLAs 2025). Treats LLMs as semantic parsers; contractual programming with pre/post-conditions; VERTEX benchmark.
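
The contractual-programming idea is the most portable of the three. A hedged sketch of the pattern — pre/post-conditions wrapped around a semantic operation, with retries on violation. `contract` and `fake_llm` are hypothetical names invented here; SymbolicAI's actual API differs:

```python
# Neuro-symbolic contract pattern: the symbolic layer validates what the
# neural layer emits, and never trusts it. `fake_llm` is a stub standing
# in for a real model call.
def contract(pre, post, retries=2):
    def wrap(fn):
        def inner(x):
            assert pre(x), "pre-condition failed"
            for _ in range(retries + 1):
                out = fn(x)
                if post(out):          # validate, don't trust
                    return out
            raise ValueError("post-condition never satisfied")
        return inner
    return wrap

def fake_llm(prompt):                  # stand-in semantic parser
    return {"sentiment": "positive" if "good" in prompt else "negative"}

@contract(pre=lambda s: isinstance(s, str) and s.strip(),
          post=lambda o: o.get("sentiment") in {"positive", "negative"})
def classify(text):
    return fake_llm(text)

print(classify("this release is good"))
```

The LLM proposes; the contract disposes — the neuro-symbolic division of labour in one decorator.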

5.12 GenBio AI — digital organism

  • Key figure: Eric Xing; Le Song.
  • Philosophical stance: Multiscale organismic intelligence — life is intelligible only across scales from DNA to tissue.
  • Flagship: AIDO — AI-Driven Digital Organism framework (arXiv 2412.06993, December 2024). Phase 1 released December 2024: AIDO.DNA (7B, 796 species), AIDO.RNA (1.6B, 42M ncRNA — largest of its kind), AIDO.Protein (MoE), AIDO.Cell (3M–650M, 50M human cells with full transcriptome context), AIDO.StructureTokenizer. 2025 expansions: AIDO.StructurePrediction (AlphaFold 3-style multi-biomolecule), AIDO.Protein-RAG, AIDO.Tissue (spatial transcriptomics).
  • Signature claim: "To truly understand life, we must model and simulate it across every scale… biology becomes computable, predictable, and ultimately programmable."

5.13 Safe Superintelligence — alignment-first

  • Key figure: Ilya Sutskever (co-founder, June 2024); Daniel Gross; Daniel Levy.
  • Philosophical stance: Build superintelligence directly, with alignment as the sole product constraint, no intermediate commercial distractions.
  • Flagship: Almost entirely stealth. Reported $5B+ valuation by late 2024, $30B+ by 2025. No public model. The philosophical significance is the bet itself: that safe ASI requires a single dedicated focus, insulated from product cadence and commercial noise. Sutskever's departure from OpenAI and reconstitution around this thesis is arguably the most consequential single-person move in recent AI history.

5.14 Liquid AI, Rain AI, Extropic — alternative substrates

  • Liquid AI (Hasani, Lechner, Amini, Rus — MIT CSAIL spinout, 2023). Liquid Foundation Models (LFM v1 October 2024; LFM2 July 2025; Liquid Nanos September 2025; LFM2 technical report arXiv 2511.23404, November 2025). Continuous-time differential-equation substrate rooted in Liquid Time-constant Networks (AAAI 2021). $250M Series A (December 2024, AMD Ventures) at $2.3B valuation.
  • Rain AI (Altman-backed, 2017–2025). Pursued digital in-memory compute, claiming up to 10,000× energy-efficiency improvements for training. A cautionary tale: its Series B stalled; by May 2025 it was seeking a buyer, with OpenAI, Nvidia, and Microsoft reportedly circling.
  • Extropic (Verdon, McCourt, 2022). Thermodynamic computing — use transistor thermal noise as entropy for sampling; pbits/pdits/pmodes composed into a Thermodynamic Sampling Unit (TSU). X0 prototype (Q1 2025), XTR-0 development platform (Q3 2025), Z1 production chip (early access 2026). Denoising Thermodynamic Models paper (arXiv 2510.23972, October 2025) claims ~10,000× lower energy per sample vs GPU diffusion on Fashion-MNIST-scale benchmarks in simulation. Open-source thrml simulator.
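
Extropic's primitive can be approximated in software. A toy p-bit — assuming the sigmoid activation standard in the probabilistic-computing literature — Gibbs-samples two coupled bits and recovers Boltzmann-like statistics; the hardware bet is that transistor thermal noise does this natively, without a pseudorandom generator in sight:

```python
import math
import random

# Software caricature of a p-bit: a unit that outputs 1 with sigmoid
# probability of its input drive. Two ferromagnetically coupled p-bits,
# Gibbs-sampled in alternation, spend most of their time aligned.
random.seed(0)

def pbit(drive):
    """Fires 1 with probability sigma(drive) = 1 / (1 + exp(-drive))."""
    return 1 if random.random() < 1 / (1 + math.exp(-drive)) else 0

J = 2.0                      # coupling strength: the bits want to agree
s = [0, 1]
agree = 0
n_steps = 20000
for step in range(n_steps):
    i = step % 2             # alternate which bit is resampled
    s[i] = pbit(J * (2 * s[1 - i] - 1))   # field from the other bit
    agree += s[0] == s[1]

print("fraction of time aligned:", agree / n_steps)  # ~ sigma(2J/2) = 0.88
```

Sampling is the bottleneck of diffusion-style generative models, which is why the claimed energy advantage is quoted per sample; whether it survives beyond Fashion-MNIST-scale simulation is the open question the Z1 chip is meant to answer.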