I. The Philosophy

The name comes from the math: arg max σ — find the argument that maximizes sigma. In statistics, σ is the standard deviation: a measure of how far values sit from the mean. Maximize it, and you're as far from average as you can get.
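Spelled out as a one-line optimization problem — a sketch of the metaphor, not a formal claim — the name reads:

    x* = arg max_x σ(x)

where σ(x) is read loosely as "how far x deviates from the field's mean."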

That's not a flex. It's a design principle.

Most people in any field converge. They read the same papers, build the same things, speak the same vocabulary. The convergence isn't wrong — it's efficient. But efficiency and originality are rarely the same direction. The work I care about lives at the intersection of fields that don't usually talk to each other: control theory and cognitive science, reinforcement learning and behavioral economics, complexity science and product design.

Sigmaxing is what happens when you stop optimizing for proximity to the mean and start optimizing for the quality of your deviation. It means building the intellectual infrastructure to think differently — and having the discipline to do it in public, over time, without needing it to be finished.

This ecosystem is that infrastructure.


II. How I Think

I am not a specialist. I am a systems thinker who has chosen to specialize in problems that require not being one.

The habit I've developed over eight years is this: when I encounter a hard problem, I ask — what field has already solved the structure of this problem, even if not this instance of it? Control engineering taught me about feedback loops and stability. Complexity science taught me about emergence and non-linearity. Behavioral economics taught me that the agent in the system is never fully rational and always consequential. Machine learning taught me how to build systems that improve from evidence rather than from explicit instruction.

None of these are sufficient alone. All of them together get close to how real systems actually behave.

The through-line I keep returning to is Systems × Behavior × Intelligence. Systems, because everything can be modeled as one. Behavior, because systems with humans in them don't follow clean equations. Intelligence, because computation is the tool that makes sense of both at scale — whether the intelligence is human or machine.

I call the synthesis decision engineering: not predicting what happens next, but designing the conditions under which better decisions become more likely.


III. What I'm Working Toward

Frontier AI is not a destination. It's an ongoing redefinition of what's possible — and more importantly, what's worth doing.

The questions I find genuinely hard, and genuinely worth pursuing:

How do learning systems behave when deployed into living systems? A model trained on historical data enters a world that changes because the model is in it. The distribution shifts. The incentives shift. The feedback loops close in ways the training never anticipated. Understanding this — not just in theory but through instrumented experiments — is where I spend most of my intellectual effort.

What does it mean for a machine to decide, versus to predict? Prediction is passive. Decision is interventional. Causal inference, reinforcement learning, mechanism design — these are the tools that move us from "what will happen?" to "what should we do, and what happens to the system when we do it?" I am more interested in the second question.

How do humans and AI systems co-exist in shared decision environments? Not cooperate in the idealized sense — but actually co-exist, with misaligned incentives, bounded cognition, and incomplete information on both sides. Multi-agent RL, behavioral modeling, bounded rationality — this is the territory.

What fails silently? Most ML failures are not dramatic. They are slow drifts, quiet degradations, confidence where uncertainty should be. Interpretability through simulation and counterfactual reasoning is how I try to see what a model is actually doing rather than what it appears to be doing.

These are not research interests in the abstract. They are the questions that shape every experiment I run, every essay I write, every tool I build.


IV. The Three Modes

Thinking, building, and creating are not three hobbies. They are three modes of processing the same underlying questions — each one doing something the others can't.

Think — cooc.ing

Crafting Order Outta Chaos

Writing forces precision. When I cannot explain an idea clearly enough to publish it, I don't fully understand it. cooc.ing is where I go to find out what I actually think.

It sits at the intersection of complexity science, cognitive science, decision science, and intelligence — human and artificial. But the organizing question is always applied: what does this mean for how we act? Not "what is true about the world" in the abstract, but "what does this change about what we should do?"

The essays here are not summaries of papers. They are attempts to find the underlying structure connecting ideas across fields — and then to say something specific about what that structure implies.

From Fundamentals to the Frontier

Building forces honesty. You can hold a flawed mental model of something until the moment you try to implement it. The implementation tells you what you were wrong about.

researchengineer.ing is a research system — not a blog — where I practice applied research, mathematical foundations, and research engineering across subjects: ML, physics, causal inference, reinforcement learning, decision theory. Whatever the problem demands. The work here is disciplined and slow. Notes that compound. Derivations that clarify. Experiments that answer specific questions.

appliedmodels.co is the empirical counterpart — focused on the HuggingFace ecosystem. Generative models trained from scratch, fine-tuned, broken apart, analyzed. Every experiment follows the same loop: form a hypothesis, run the experiment, publish what I find, including what didn't work. Open-source. No black boxes.

Where Expression Gets to Be Untidy

Creating forces humanity. The research and the essays are disciplined. This is where the discipline relaxes.

kirukkify.ing is a digital garden and cultural archive — essays, art, photography, music notes, recipes, Tamil heritage writing, a learning logbook. Seven collections. One creative identity. No rules. It exists because the person doing all the research and building also has a cultural life, aesthetic opinions, and things they want to say that don't fit a research frame. That's not a distraction. That's the whole point.

The creative mode also has a cognitive function: it loosens the constraints that disciplined thinking imposes, which is often where the unexpected connections appear.


V. Taste & Agency

What you choose to pursue says as much as what you've built.

I am drawn to problems that are structurally hard rather than just computationally expensive. The interesting difficulty is usually not "can we train a bigger model?" but "are we even asking the right question?" I find work most satisfying when it requires holding multiple levels of abstraction simultaneously — theoretical foundations, empirical evidence, and the practical constraints of real deployment.

I am skeptical of complexity added without necessity. The ecosystems I build are static sites. The experiments I run are reproducible. The tools I ship are lightweight. This is not frugality — it is a position about what makes work durable and trustworthy.

I am not interested in manufacturing expertise. I am interested in the documented, honest reality of working carefully at the edge of what I understand — including, and especially, when that edge moves faster than expected.

The agency here is deliberate. This ecosystem was not assembled. It was designed — around a set of bets about what kinds of work compound, what kinds of questions stay interesting as the field shifts, and what it means to build intellectual infrastructure rather than just intellectual output.

The bet is this: the people who will matter most in frontier AI are not those who most efficiently follow the gradient of the current benchmark — they are those who have enough intellectual breadth to see when the benchmark is wrong, and enough depth to propose something better.


VI. The Labs — Research Into Product

Research that never ships is incomplete. prachalabs is where ideas cross from understanding into use — tools built because the thinking behind them revealed a gap that a product could fill.

The Product Lab

  • broCoDDE — Collaborative engineering and content enablement.
  • Rewire — A counterfactual thinking tool. What if the decision had gone differently?
  • Bounded Agents — Agent simulation for consumer research and ecological modeling.
  • Davos Decoded — Global systems analysis, decoded.
  • Quick Wrappers — Lightweight bridges to extensive platforms.

Each product is a consequence of the research, not an escape from it. The lab doesn't close.


VII. Operating Principles

  • Applied, always. — Nothing is studied for its own sake. The question behind every essay, experiment, and tool is: what does this change about how we act?
  • Honest over performative. — No manufactured certainty. No expertise borrowed from proximity to the right names. Just the documented reality of thinking carefully while moving.
  • Lightweight and durable. — Static sites, open-source tools, reproducible experiments. Complexity is added only when it earns its place.
  • Extensible by design. — The structure holds because it was built to accommodate what doesn't exist yet. New research will open. New tools will ship. New essays will land.
  • In motion, not in a hurry. — Consistent work over time beats bursts of urgency. The ecosystem is a living system built by a living person. It changes because I do.

VIII. The Properties

  • sigmax.ing — The philosophy. arg max σ. The root.
  • cooc.ing — Think. Essays at the intersection of complexity, cognition, and decision science.
  • researchengineer.ing — Build. Applied research, mathematics, and engineering foundations.
  • appliedmodels.co — Build. Generative model experiments, open-source, empirical.
  • kirukkify.ing — Create. Digital garden, cultural archive, living sketchbook.
  • prachalabs.com — Labs. Research turned into product.
  • pracha.me — You are here. The hub.