Introducing “nudge_” – A Newsletter for the Inquisitive Grad Mind

For a long time, I’ve been sharing thoughts and insights here and there—especially on LinkedIn—and many of you know me from those snippets. Now, I’m excited to bring all that informal writing into a more structured and purposeful format. (10-10-2025, Issue 1. Curations and recommendations by a human; editorial transcribed from Pracha’s voice notes.)

The result is “nudge_,” a newsletter crafted specifically for my fellow MS grad students—especially those navigating the dynamic world of artificial intelligence, machine learning, data science, research engineering, and everything in between. In “nudge_,” you’ll find a carefully curated mix of content designed to inspire and inform. I want to help you navigate complexities, discover new ideas, and gain practical insights. Whether it’s career advice, job preparation tips, industry trends, or highlights from leading researchers and innovative projects, this newsletter aims to be your intellectual companion.

There’s also a little backstory: I initially hoped to serve on our student council’s professional resources team, but when that didn’t pan out, I decided to channel the same energy into something everyone could benefit from.

Think of “nudge_” as a structured, multi-faceted resource that brings together career insights, learning opportunities, industry updates, and thought-provoking research. It’s all about nudging you toward new perspectives and unique opportunities that might not be on your radar yet. Welcome to “nudge_,” and I hope it becomes a valuable part of your grad school journey!

📚 Book Corner: Spotlight on “The Scaling Era”

In this inaugural issue of nudge_, we’re kicking things off with a book that’s both timely and deeply insightful: “The Scaling Era: An Oral History of AI, 2019–2025” by Dwarkesh Patel and Gavin Leech.

Why start with this book? Simply put, it’s a fresh-off-the-press, comprehensive look at the whirlwind evolution of AI over the past few years, capturing the voices and perspectives of some of the field’s most influential figures. Patel and Leech have curated a series of conversations that dive into everything from compute scaling and data architectures to alignment challenges and the next frontiers of large language models.

In particular, the chapter on safety—featuring insights from Anthropic’s Dario Amodei and Google DeepMind’s Demis Hassabis—offers a nuanced look at how leading AI thinkers are approaching the tricky terrain of making powerful models safer and more reliable. It’s not just a random walk through AI topics; it’s a thoughtfully structured oral history that gives you a distilled snapshot of where we are and where we’re heading.

For any grad student in data science, machine learning, or related fields, “The Scaling Era” is a perfect starting point. It not only gives you a sense of the big picture but also nudges you to think critically about the challenges and opportunities that lie ahead in our field.

So, let’s start this journey with a book that’s sure to spark some thoughtful conversations and maybe even reshape how you see the AI landscape. Happy exploring!

👤 Leader’s Corner: Dario Amodei and the Vision of Powerful AI

For this first issue of nudge_, I wanted to start the Leader’s Corner with someone whose thinking genuinely inspires me — Dario Amodei, the CEO of Anthropic. Over the past week, I’ve been reading his essay Machines of Loving Grace, and it struck me as one of the most grounded yet visionary perspectives on where AI might be headed.

What I found refreshing about Amodei’s writing is that he doesn’t use words like AGI or singularity to describe the future of AI. Instead, he talks about something he calls Powerful AI — systems capable of accelerating human progress across some of humanity’s hardest and most essential challenges. He frames this in five big areas: biology and health, economics and poverty, peace and governance, work and meaning, and safety and alignment.

In the first area, he envisions AI helping cure diseases like cancer, Alzheimer’s, and other genetic and infectious conditions — not as distant science fiction, but as achievable progress once we align large models with the right goals and guardrails. He describes AI as a potential Nation of Geniuses — a powerful collective intelligence, operating in data centers, capable of analyzing problems faster and deeper than any individual human mind. But with that power comes new risks: emergent behaviors, opacity, and the urgent need for steerable, interpretable, and transparent systems that remain aligned with human values.

I found this idea deeply resonant. It’s a vision of AI not just as a tool, but as a collaborator — one that can amplify discovery and problem-solving if designed responsibly. Amodei also writes about how AI could transform governance by making judicial and policy decisions faster and more informed, or how it might reshape the meaning of work itself once machines take over more cognitive labor. In each area, he brings the conversation back to a human-centered question: how do we ensure that progress remains meaningful?

He also connects this essay to another of his writings, The Urgency of Mechanistic Interpretability, which argues that to build trustworthy AI, we must first understand how these systems think. That urgency — to make intelligence transparent — feels like a shared responsibility for all of us entering this field.

Reading Amodei’s thoughts alongside The Scaling Era gives a sense of the “big picture” we often miss in our day-to-day technical work. It’s not just about deploying models or tuning metrics; it’s about learning to see AI as a generational instrument of change — one that can serve society if guided with care and intention.

As graduate students and emerging researchers, this is perhaps our real task: not just to learn the tools, but to identify the problems worth solving. Amodei’s vision of “Powerful AI” reminds us that the frontier of this field isn’t defined by capability alone, but by the wisdom with which we choose to use it.

Machines of Loving Grace

🏢 Company Corner: Thinking Machines and the Art of Building Fast

In this section of nudge_, I wanted to start highlighting companies that are doing truly interesting work — not just because of their products, but because they represent the kind of innovation and curiosity we, as graduate students, can aspire toward. These are the kinds of places we might one day want to work at, or at least learn from, to understand what kind of mindset and preparation such environments expect.

One company that recently caught my attention is Thinking Machines, founded by Mira Murati, who previously served as the CTO of OpenAI. The company is quite new, but what they’re doing already feels exciting. Their first product, Tinker, is an API that lets people fine-tune large language models efficiently using LoRA—short for Low-Rank Adaptation.

For those of us who’ve experimented with LoRA or QLoRA to fine-tune large models, the concept is familiar but not always easy to scale. Many of us know the struggle of limited GPU access or infrastructure bottlenecks that make research iterations painfully slow. Tinker seems to solve exactly that problem. It wraps together both the code and the infrastructure so that researchers, developers, and builders can quickly run experiments without worrying too much about setup or compute constraints.
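For anyone new to the idea, here is a minimal sketch of what LoRA does under the hood—adapting a frozen weight matrix through a small low-rank update instead of retraining every parameter. This toy NumPy example is purely illustrative (the dimensions, names, and scaling are my own choices, not Tinker’s API):

```python
import numpy as np

# Toy linear layer adapted with LoRA (Low-Rank Adaptation).
# rank << d_in, so the trainable update is tiny compared to W.
d_in, d_out, rank = 64, 64, 4

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# Trainable low-rank factors: only (d_out*rank + rank*d_in) new parameters.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))              # B starts at zero, so the adapted
                                         # layer initially equals the base layer

def forward(x, alpha=8.0):
    # Effective weight is W + (alpha/rank) * B @ A, computed without ever
    # materializing a full-rank update matrix.
    return W @ x + (alpha / rank) * (B @ (A @ x))

x = rng.standard_normal(d_in)
full_params = W.size
lora_params = A.size + B.size
print(f"full-update params: {full_params}, LoRA params: {lora_params}")
```

The point of the sketch is the parameter count: here the full weight has 4,096 entries, while the LoRA factors add only 512 trainable ones. That gap is what makes fine-tuning feasible on limited GPUs, and services like Tinker package this kind of recipe together with the infrastructure to run it.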

I found this particularly meaningful because it reflects a larger trend — the democratization of model training. Tools like this make it easier for individuals and small teams to work on serious problems that once required massive resources. For us as students, it’s also a reminder that staying curious about such companies helps us understand where the field is heading, what skill sets are becoming valuable, and what kind of preparation should serve as our north star.

Going forward, this section will continue to introduce interesting companies, research labs, and initiatives that are shaping the future of AI. The idea is not to overwhelm, but to develop a consistent practice of exploration — to build an edge through awareness, reflection, and curiosity.

✉️ Closing Notes: Let’s Keep the Conversation Going

I’ve started nudge_ as a way to share structured insights and fresh ideas with all of you, my fellow grad students. My hope is that each week you find a little nudge — not a sledge, as Richard Thaler would say — that helps you step beyond textbooks and assignments into the wider world of what’s happening in AI and beyond.

If you find these ideas interesting, or if you have your own thoughts, feedback, or even suggestions on what you’d like to see in future issues, I’d love to hear from you. You can always reach out to me via LinkedIn, X, or email. Your input will help me keep improving and making this newsletter a little better every week.

Think of this as a space where we can all explore together — a way to stay in touch with the ideas and innovations that give us that extra edge. After all, learning is a journey, and sometimes all we need is a little nudge in the right direction.

Thank you for reading, and I can’t wait to share more with you in the next issue.

This post is licensed under CC BY 4.0 by the author.