
Shared bibliography · Updated April 21, 2026

Survey References

  • Albantakis, L., et al. (2023). Integrated information theory (IIT) 4.0: Formulating the properties of phenomenal existence in physical terms. PLoS Computational Biology, 19(10). https://doi.org/10.1371/journal.pcbi.1011465
  • Anthropic. (2022). Constitutional AI: Harmlessness from AI feedback. https://www.anthropic.com/research/constitutional-ai-harmlessness-from-ai-feedback
  • Anthropic. (2024a). Mapping the mind of a large language model. https://www.anthropic.com/research/mapping-mind-language-model
  • Anthropic. (2025a). Tracing the thoughts of a large language model. https://www.anthropic.com/research/tracing-thoughts-language-model
  • Anthropic. (2025b). Signs of introspection in large language models. https://www.anthropic.com/research/introspection
  • Anthropic. (2025c). Exploring model welfare. https://www.anthropic.com/news/exploring-model-welfare
  • Aristotle. (2004). Nicomachean Ethics (R. Crisp, Trans.). Cambridge University Press.
  • Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv:2212.08073.
  • Bengio, Y. (2017). The consciousness prior. arXiv:1709.08568.
  • Bertalanffy, L. von. (1968). General System Theory. George Braziller.
  • Butlin, P., Long, R., et al. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv:2308.08708.
  • DeepSeek-AI. (2025). DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. arXiv:2501.12948.
  • Descartes, R. (1996). Meditations on First Philosophy (J. Cottingham, Trans.). Cambridge University Press. (Original work published 1641)
  • Descartes, R. (1998). Discourse on Method and Related Writings (D. A. Cress, Trans.). Hackett. (Original work published 1637)
  • Dreyfus, H. L. (1992). What Computers Still Can't Do. MIT Press. (Original work published 1972)
  • Extropic. (2025). Thermodynamic computing: From zero to one. https://extropic.ai/writing/thermodynamic-computing-from-zero-to-one
  • Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127-138. https://doi.org/10.1038/nrn2787
  • Friston, K., Parr, T., & de Vries, B. (2022). Active Inference: The Free Energy Principle in Mind, Brain, and Behavior. MIT Press.
  • GenBio AI. (2024a). Introducing GenBio AI. https://genbio.ai/introducing-genbio-ai/
  • GenBio AI. (2024b). GenBio AI releases phase 1 of the world's first digital organism to transform medical research. https://genbio.ai/genbio-ai-releases-phase-1-of-worlds-first-digital-organism-to-transform-medical-research/
  • Goodfellow, I., et al. (2014). Generative adversarial nets. NeurIPS.
  • Google DeepMind. (2025a). AlphaEvolve: A Gemini-powered coding agent for designing advanced algorithms. https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/
  • Google DeepMind. (2025b). Gemini Deep Think achieves gold-medal standard at the International Mathematical Olympiad. https://deepmind.google/discover/blog/gemini-deep-think-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/
  • Google DeepMind. (2025c). Genie 3: A new frontier for world models. https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
  • Hasani, R., et al. (2021). Liquid time-constant networks. AAAI Conference on Artificial Intelligence, 35(9), 7657-7666.
  • Heidegger, M. (1962). Being and Time (J. Macquarrie & E. Robinson, Trans.). Harper & Row. (Original work published 1927)
  • Hoffmann, J., et al. (2022). Training compute-optimal large language models. arXiv:2203.15556.
  • Hume, D. (2000). A Treatise of Human Nature. Oxford University Press. (Original work published 1739-1740)
  • IEA. (2026). Electricity 2026. https://www.iea.org/reports/electricity-2026
  • Kant, I. (1998). Critique of Pure Reason (P. Guyer & A. Wood, Trans.). Cambridge University Press. (Original work published 1781)
  • Kaplan, J., et al. (2020). Scaling laws for neural language models. arXiv:2001.08361.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. https://doi.org/10.1017/S0140525X16001837
  • Landauer, R. (1961). Irreversibility and heat generation in the computing process. IBM Journal of Research and Development, 5(3), 183-191. https://doi.org/10.1147/rd.53.0183
  • LeCun, Y. (2022). A path towards autonomous machine intelligence. Position paper, OpenReview. https://openreview.net/forum?id=BZ5a1r-kVsf
  • Liquid AI. (2025). Introducing LFM2: The fastest on-device foundation models on the market. https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models
  • Locke, J. (1975). An Essay Concerning Human Understanding. Oxford University Press. (Original work published 1689)
  • Meadows, D. H. (2008). Thinking in Systems. Chelsea Green.
  • Merleau-Ponty, M. (2012). Phenomenology of Perception (D. A. Landes, Trans.). Routledge. (Original work published 1945)
  • Meta AI. (2025a). Announcing V-JEPA 2: New state-of-the-art video world model and benchmarks. https://ai.meta.com/blog/v-jepa-2-world-model-benchmarks/
  • Meta AI. (2025b). V-JEPA 2: Self-supervised video models enable understanding, prediction and planning. https://arxiv.org/abs/2506.09985
  • Mouret, J.-B., & Clune, J. (2015). Illuminating search spaces by mapping elites. arXiv:1504.04909.
  • Newell, A., & Simon, H. A. (1976). Computer science as empirical inquiry: Symbols and search. Communications of the ACM, 19(3), 113-126.
  • NVIDIA. (2025a). NVIDIA launches Cosmos world foundation model platform for physical AI. https://nvidianews.nvidia.com/news/nvidia-launches-cosmos-world-foundation-model-platform-for-physical-ai
  • NVIDIA. (2025b). NVIDIA expands Cosmos world foundation models and physical AI data tools. https://nvidianews.nvidia.com/news/nvidia-expands-cosmos-world-foundation-models-and-physical-ai-data-tools
  • OpenAI. (2024a). Learning to reason with LLMs. https://openai.com/index/learning-to-reason-with-llms/
  • OpenAI. (2024b). Deliberative alignment: Reasoning enables safer language models. https://openai.com/index/deliberative-alignment/
  • OpenAI. (2025a). Introducing OpenAI o3 and o4-mini. https://openai.com/index/introducing-o3-and-o4-mini/
  • OpenAI. (2025b). Introducing GPT-5. https://openai.com/index/introducing-gpt-5/
  • Pearl, J. (2000). Causality. Cambridge University Press.
  • Pearl, J. (2019). The seven tools of causal inference, with reflections on machine learning. Communications of the ACM, 62(3), 54-60. https://doi.org/10.1145/3241036
  • Pearl, J., & Mackenzie, D. (2018). The Book of Why. Basic Books.
  • Physical Intelligence. (2024). pi_0: A vision-language-action flow model for general robot control. https://arxiv.org/abs/2410.24164
  • Physical Intelligence. (2025a). pi_0.5: A vision-language-action model with open-world generalization. https://www.physicalintelligence.company/blog/pi05
  • Physical Intelligence. (2025b). pi_0.6 / pi-star: A VLA that learns from experience. https://www.physicalintelligence.company/blog/pi0-6
  • Safe Superintelligence Inc. (2024). Safe Superintelligence Inc. https://ssi.inc/
  • Sakana AI. (2024). Evolving new foundation models: Unleashing the power of automating model development. https://sakana.ai/evolutionary-model-merge/
  • Sakana AI. (2026). The AI Scientist: Towards fully automated AI research, now published in Nature. https://sakana.ai/ai-scientist-nature/
  • Seth, A. (2021). Being You. Dutton.
  • Seth, A. (2024). Conscious AI and biological naturalism. Behavioral and Brain Sciences, 47, e27.
  • Snell, C., et al. (2024). Scaling LLM test-time compute optimally can be more effective than scaling model parameters. arXiv:2408.03314.
  • Spinoza, B. (2002). Complete Works (S. Shirley, Trans.). Hackett. (Original work published 1677)
  • Stanley, K., & Lehman, J. (2015). Why Greatness Cannot Be Planned. Springer.
  • Sutton, R. S. (2019). The bitter lesson. http://www.incompleteideas.net/IncIdeas/BitterLesson.html
  • Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42(1), 230-265. https://doi.org/10.1112/plms/s2-42.1.230
  • Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460. https://doi.org/10.1093/mind/LIX.236.433
  • Vaswani, A., et al. (2017). Attention is all you need. NeurIPS.
  • VERSES. (2024). From pixels to planning: Scale-free active inference. https://www.verses.ai/
  • VERSES. (2025a). Genius platform. https://www.verses.ai/
  • VERSES. (2025b). AXIOM. https://www.verses.ai/
  • Whitehead, A. N. (1978). Process and Reality. Free Press. (Original work published 1929)
  • Wiener, N. (1948). Cybernetics. MIT Press.
  • World Labs. (2025). Marble: A multimodal world model. https://www.worldlabs.ai/blog/marble-world-model
  • xAI. (2025a). Grok 3. https://x.ai/news/grok-3/
  • xAI. (2025b). Grok 4. https://x.ai/news/grok-4/
  • xAI. (2025c). Colossus. https://x.ai/colossus

© 2024–2026 Prabakaran Chandran