About
My Journey in Data Science: A Foundation for Future Impact
Hello everyone, I’m Prabhakaran Chandran, aka PraCha. I used to write under Karan as my pen name, but I’ve switched to PraCha because it has such a lovely ring to it.
For those in the tech and data science community, the career path is rarely linear. It’s a constant process of building, learning, and strategically deepening one’s expertise. With over six years of hands-on experience, my decision to pursue a Master’s in Data Science at Columbia University is a deliberate step in that journey. For hiring managers, tech leaders, and my peers, I want to share the “why” behind this move—it’s a story of building a solid foundation and now seeking the depth required to architect the future of intelligent systems.
A Journey of Impact Across Industries
My career has been a privilege, allowing me to solve complex problems at scale. I see my past work not just as a series of roles, but as a portfolio of tangible impacts.
My time at Mu Sigma was the launchpad. It was more than a job; it was a rigorous education in first-principles thinking, where I learned to deconstruct complex business problems for Fortune 500 clients. The breadth of my work there was foundational, spanning multiple industries and a wide array of data science disciplines:
Energy Forecasting: For a Japanese green energy firm, I engineered a hybrid machine learning solution combining physics-based models with gradient boosting to accurately predict solar energy output, directly influencing bidding strategies (a minimal sketch of this hybrid pattern follows this list).
Advanced Manufacturing & Computer Vision: To improve quality assurance in 3D printing, I built and fine-tuned a multi-task deep learning architecture for product segmentation and defect detection. We also leveraged Generative Adversarial Networks (GANs) and Physics-Informed Neural Networks (PINNs) to optimize manufacturing parameters.
Pharmaceutical Research & NLP: I developed a custom BioBERT-based NER pipeline to identify and extract genetic and gastroenterological entities from scientific research papers, significantly accelerating the literature review process for bio-pharma research.
Retail & Supply Chain Optimization: I designed discrete event simulation models for retail warehouses to proactively manage inventory and implemented an evolutionary algorithm-based framework to create optimal replenishment strategies based on demand forecasts.
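To make the first of these concrete: a common way to combine a physics-based model with gradient boosting is residual learning, where the physics model provides a first-principles estimate and the ML model learns the systematic error it leaves behind. The sketch below is illustrative only; the synthetic data and the toy physics_baseline function are assumptions for demonstration, not the production system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical first-principles model: irradiance -> expected panel output.
def physics_baseline(irradiance, panel_efficiency=0.18):
    return irradiance * panel_efficiency

rng = np.random.default_rng(42)
irradiance = rng.uniform(0, 1000, size=500)       # W/m^2
weather = rng.normal(0, 1, size=(500, 2))         # e.g. cloud cover, temperature
X = np.column_stack([irradiance, weather])
y = physics_baseline(irradiance) + 5 * weather[:, 0] + rng.normal(0, 2, 500)

# Step 1: the physics model gives the baseline forecast.
baseline = physics_baseline(X[:, 0])

# Step 2: gradient boosting learns the residual the physics model misses.
gbm = GradientBoostingRegressor().fit(X, y - baseline)

# Final forecast = physics estimate + learned correction.
forecast = baseline + gbm.predict(X)
print(f"mean absolute error: {np.abs(forecast - y).mean():.2f}")
```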
This foundation allowed me to lead impactful, end-to-end projects in subsequent roles:
At Informatica, I engineered a multi-agent AI system to automate the troubleshooting of cloud data integration errors. Using LangGraph, RAG, and fine-tuned LLMs (such as LLaMA and Phi-3), we built a system that substantially cut engineering effort and enabled customers to self-serve many critical issues. This wasn’t just about applying AI; it was about re-architecting an entire support workflow to enhance operational efficiency.
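For a flavor of what such a graph looks like, here is a toy two-node sketch in LangGraph, not Informatica’s actual system: the state fields and the stubbed retrieve/diagnose functions are hypothetical stand-ins for the real RAG and LLM steps.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class TriageState(TypedDict):
    error_log: str
    context: str
    diagnosis: str

def retrieve(state: TriageState) -> dict:
    # RAG step: look up similar past incidents and docs (stubbed here).
    return {"context": f"docs related to: {state['error_log'][:50]}"}

def diagnose(state: TriageState) -> dict:
    # LLM step: a fine-tuned model would reason over log + context (stubbed).
    return {"diagnosis": f"likely cause inferred from {state['context']}"}

graph = StateGraph(TriageState)
graph.add_node("retrieve", retrieve)
graph.add_node("diagnose", diagnose)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "diagnose")
graph.add_edge("diagnose", END)

app = graph.compile()
result = app.invoke({"error_log": "ConnectionTimeout: source endpoint unreachable",
                     "context": "", "diagnosis": ""})
print(result["diagnosis"])
```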
At Captain Fresh, I tackled a unique challenge in the global seafood supply chain. We developed computer vision models using YOLOv8 and the Segment Anything Model (SAM) to identify and segment aquaculture ponds from satellite imagery with high accuracy. I then designed and deployed the MLOps pipeline on AWS SageMaker, creating a system that gave hundreds of farmers near real-time data on harvest cycles and water quality, directly impacting procurement and sustainability.
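For readers curious about that pipeline’s shape, here is a minimal sketch of the detect-then-segment pattern: YOLOv8 proposes bounding boxes, and SAM refines each box into a precise mask. The file paths ("ponds_yolov8.pt", "sam_vit_b.pth", "satellite_tile.png") are placeholder names, not our actual artifacts.

```python
import cv2
from ultralytics import YOLO
from segment_anything import sam_model_registry, SamPredictor

image = cv2.cvtColor(cv2.imread("satellite_tile.png"), cv2.COLOR_BGR2RGB)

# Step 1: a fine-tuned YOLOv8 detector proposes boxes for candidate ponds.
detector = YOLO("ponds_yolov8.pt")
boxes = detector(image)[0].boxes.xyxy.cpu().numpy()

# Step 2: SAM turns each box prompt into a pixel-precise pond mask.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)
for box in boxes:
    masks, scores, _ = predictor.predict(box=box, multimask_output=False)
    print(f"pond mask covers {masks[0].sum()} pixels (score {scores[0]:.2f})")
```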
The Realization: Moving from Application to First Principles
Across these experiences, a pattern emerged. In the fast-paced industry environment, we often focus on the immediate application of tools under tight deadlines. We achieve great results, but I often felt we could build more robust, more innovative, and more fundamentally sound solutions.
I recall a project modeling latent consumer behaviors like “stickiness.” My instinct was to use foundational methods like Kalman filters or dynamic factor models to uncover the unobserved drivers. However, project constraints often lead us toward more straightforward approaches. This isn’t a critique, but a reality that sparked my desire to go deeper. To truly lead and innovate, I believe one must move beyond the toolkit and master the underlying principles.
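To illustrate what I mean by foundational methods: a Kalman filter treats a quantity like stickiness as a hidden state that drifts over time and is observed only through noisy proxies such as repeat-purchase rates. The 1-D sketch below runs on synthetic data and shows the core predict-update loop; the noise parameters are illustrative, not tuned values from that project.

```python
import numpy as np

def kalman_1d(observations, q=0.01, r=0.5):
    """Minimal 1-D Kalman filter: estimate a latent state (e.g. 'stickiness')
    from noisy observed proxies. q = process noise, r = measurement noise."""
    x, p = 0.0, 1.0          # initial state estimate and its variance
    estimates = []
    for z in observations:
        p = p + q            # predict: the latent state drifts a little
        k = p / (p + r)      # Kalman gain: how much to trust the new data
        x = x + k * (z - x)  # update: correct the estimate toward the observation
        p = (1 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Noisy weekly engagement proxy around a slowly rising latent signal.
rng = np.random.default_rng(0)
latent = np.linspace(0.2, 0.8, 52)
noisy = latent + rng.normal(0, 0.3, size=52)
print(kalman_1d(noisy)[-5:])  # smoothed estimates of the hidden driver
```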
My North Star: “Decision Engineering”
This drive has crystallized into a concept I call Decision Engineering. This is my vision for the next generation of AI systems. It’s about creating frameworks that help organizations make scalable, trustworthy decisions under uncertainty. To build this future, a deep, theoretical understanding of Causal Inference, Probabilistic Modeling, and Reinforcement Learning is non-negotiable. These are the pillars for building systems that don’t just predict, but can reason, infer, and act optimally.
Why Columbia? A Strategic Choice for Growth
My choice of Columbia was intentional. Its Data Science program is uniquely anchored in three critical departments: Statistics, Industrial Engineering & Operations Research (IEOR), and Computer Science. This structure allows me to learn from the very pioneers whose work is shaping the future I want to build.
Studying Causal Inference with Professor Elias Bareinboim, probabilistic modeling with Professor David Blei, and operations research with professors like Shira Mitchell and Garud Iyengar is a direct investment in the expertise I need. This academic rigor, combined with the vibrant ecosystem of New York City—a living lab for behavioral science and diverse industries—provides the perfect environment for this next phase of my growth.
The Road Ahead
As a first-generation graduate, I am driven by the pursuit of excellence and the opportunity to solve world-class problems. My six years of experience form the foundation. This Master’s degree is the catalyst. My goal is to combine practical, battle-tested experience with deep theoretical knowledge to lead teams, build impactful products, and contribute to the practice of data science in a meaningful way.
Thank you for reading, and I look forward to sharing more of my learnings and projects.