Is Data Science Dead? Or What Keeps the Field Alive?
Decision engineering: my personal take on what data science should really be.
For quite some time now, I have been thinking about this term: decision engineering. It is not just because I once held the designation “Decision Scientist” in my first role, or because my entire organization at Mu Sigma bet heavily on decision sciences rather than simply calling it data science. There is a reason I can see clearly today, one that was already taking shape back then and has only grown sharper with time.
My Early Experience at Mu Sigma
At Mu Sigma, we never worked in isolation. Everyone operated end-to-end. It was never “here is a dataset, build a random forest and give it back to me.” Though a couple of engagements might have leaned that way if clients pushed for it, the company as a whole focused on something much deeper: helping enterprises make better decisions and building systems that enable those decisions every single day. We thought about how the solution would sit comfortably inside the client organization, not just deliver a model in a Jupyter notebook. This was operating within the system.
What Decision Science Really Means
Across different segments of the community, the definition of decision science varies. For example, some people define it mainly as an optimization problem, because ultimately decision-making has to deliver optimal or at least better decisions, and optimization clearly helps there. But the reason we call it decision science is simple: the end outcome is decisions, not models. To reach those decisions we need a scientific, structured approach with real methodological backup.
Take customer-facing problems in most CPG or retail companies. The focus is on understanding why the problem exists from the customer's side: how to design something that genuinely accounts for customer perception and behavior. One project I worked on used agent-based simulation to understand how customers would behave when a new product or campaign launched. We simulated the entire community or customer base, saw what went wrong and what went right, and then tweaked our decisions accordingly. It was exactly like the world models we talk about today in generative AI, except we were doing it years earlier for business decisions. The same logic applied to market simulators we built to understand how pricing, customer promotions, and trade promotion strategies would really impact market shares, and therefore how we could improve them. Budgeting, supply-chain planning, and many other areas followed the same pattern.
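A minimal sketch of the idea behind such a simulation (every parameter and name here is illustrative, not the original project's model): each agent adopts a product with a probability that rises as more of its peers adopt, and we compare decisions by re-running the world under different levers.

```python
import random

def simulate_adoption(n_agents=1000, steps=20, base_rate=0.02,
                      social_lift=0.3, seed=42):
    """Toy agent-based model: at each step, a non-adopter adopts with a
    probability that grows with the fraction of agents who already have."""
    rng = random.Random(seed)
    adopted = [False] * n_agents
    history = []
    for _ in range(steps):
        frac = sum(adopted) / n_agents
        p = base_rate + social_lift * frac  # peer influence raises adoption odds
        for i in range(n_agents):
            if not adopted[i] and rng.random() < p:
                adopted[i] = True
        history.append(sum(adopted))
    return history

# Compare two hypothetical campaign designs by changing a decision lever:
weak = simulate_adoption(social_lift=0.1)
strong = simulate_adoption(social_lift=0.5)
```

Even this toy version shows the decision-engineering move: the output is not a model score but a comparison of scenarios a decision-maker can act on.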
At the end of the day, none of these were about building random dashboards or isolated models. They were about creating systems that helped people take decisions or validate decisions in real time.
Why Data Science Exists
That early experience set a very strong prior for me. Most people who learn data science through a college degree, especially the BTech or BS programs in India, are introduced mainly to model-building. At most, they learn statistical models, and then many quickly pivot to computer vision or NLP because there are better job opportunities or higher salaries. That is how the mindset often develops. Recently I had a conversation on Twitter with one of my connections. He expressed doubt about why people even study data science when computer science people are also learning machine learning, and he observed that CS seems like the better option for job prospects. I do not want to defend data science as the single best degree in the world. But one thing is clear: the field exists for a reason, and we cannot judge it only by what any single curriculum teaches, because everyone’s definition and priorities differ. People come from cognitive science, psychology, policy backgrounds; the way they see opportunities and shape their learning space is naturally different.
My own control theory and control systems background gave me a much better view of data science. Over six or seven years I never confined myself to just building models. It was always about how I could help companies design a process, think in systems, and think from a design perspective. How could data and machine learning actually solve the problem? Sometimes the problems were very blurry. We had to spend a lot of time just defining them before touching any code.
For example, there was the Shrimp Trade Atlas project. We were prototyping an economic index for shrimp cultivation, covering both supply and demand, by pulling together satellite imagery, in-situ data, and export and import records. There was no visible, pre-defined problem or any company already doing it this way. We had to define the problem ourselves: if we feed this information in, decision-makers will be able to navigate where the potential supply and demand zones are and how to prioritize. From a traditional data science curriculum lens, people might ask, “Where is the data? What model are you building: decision tree, random forest, XGBoost?” But that comes second. The first stage is designing the decision itself: if I decide this way, what data would I need, and what inference or modeling techniques would make it better?
This is exactly why the field of data science exists. Chris Wiggins, applied math professor at Columbia and Chief Data Scientist at The New York Times, captured it beautifully in his book How Data Happened. He shows how people have used data to make decisions and build strategy for hundreds of years, from early attempts to design the perfect inheritance or understand how parental status impacts children’s success, to the very invention of regression itself being tied to such strategic and even political narratives. The same spirit continued at NASA, in big companies, in winning elections, developing better drugs, reducing congestion, and solving all kinds of peculiar, high-stakes problems.
The Interdisciplinary Power of Data Science Institutes
Look at the major data science institutes in the United States: Harvard Data Science Institute, Columbia Data Science Institute, Chicago, Brown. These are not just degree-providing institutions. They are research and problem-solving institutes with a very clear objective: this work will make things better and create real impact. They bring together people from neuroscience, policy, public health, cognitive science, psychology, artificial intelligence, and more. Real-world problems cannot be solved from a single theoretical viewpoint or one specific sub-space. That is why they emphasize interdisciplinary research and move from foundational theory to applied work. They advance new methods to understand data, because as problems and data grow in complexity, you cannot keep using the same old regression for everything.
The Real Value for Companies
For companies and enterprises, this mindset creates jobs and real value. Companies have always had software developers, but just building software often lacks visibility and a proper feedback loop. Data scientists sit right on that feedback loop. We see the software, the customers, and the internal systems as one interconnected whole. We ask: How is this product actually enabling customers to buy more? Where are the frictions? How have customer preferences changed over time? A software engineer in a CPG company may not naturally think about those customer implications, but a data scientist must.
In my previous company, the software engineering team was building a tool for engineers to do troubleshooting better. As a data scientist, I was thinking about how people were actually using the product, where the frictions were, and how we could solve some of those troubleshooting problems directly from the data itself, reducing time consumed and improving the experience.
Another powerful example from my first company: power trading. We had to predict how much solar power would be produced so the company could confidently bid on the grid. If you supply extra, you lose money; if you supply less, you pay penalties. The solution was never just one model. We had to build additional scaffolds around uncertainty, risk management, and real-time decision rules so we could avoid penalties without missing opportunities. That is decision engineering in action.
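The asymmetric costs make this a classic newsvendor problem: the optimal bid is not the mean forecast but a quantile set by the ratio of the two costs. A hedged sketch with toy numbers (the real engagement used far richer forecasts and market rules than this):

```python
def optimal_bid(production_samples, shortfall_penalty, spill_cost):
    """Newsvendor rule: bid the q-th quantile of forecast production, where
    q = spill_cost / (spill_cost + shortfall_penalty).
    Steep shortfall penalties push the bid toward lower quantiles."""
    q = spill_cost / (spill_cost + shortfall_penalty)
    samples = sorted(production_samples)
    idx = min(int(q * len(samples)), len(samples) - 1)
    return samples[idx]

# Toy Monte Carlo samples of tomorrow's solar output (MWh), purely illustrative.
samples = [80, 90, 95, 100, 105, 110, 120, 130, 140, 150]

# Under-delivery penalties are steeper than over-delivery losses,
# so the optimal bid sits below the median forecast.
bid = optimal_bid(samples, shortfall_penalty=3.0, spill_cost=1.0)
```

The design point is that the forecast model stays the same; what changes the money outcome is the decision rule wrapped around it.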
Why I Call It Decision Engineering
Today, in April 2026, many of the data science jobs I see still test mainly for SQL pipelines, dashboards, and simple A/B testing case studies. Meanwhile, the narrative that “data science is dead” keeps circulating because LLMs can now generate basic models or dashboards. But that narrative only applies to the commoditized layer: the slice of the field that was only ever “build dashboards and models.” As the No Free Lunch theorem reminds us, no single model performs best across all problems. Generic tools still need smart, context-aware adaptation.
That is why I now prefer to frame my work as decision engineering. It is not just saying “you can do this.” It is doing it. My thought processes are different because they are mapped to the real objectives defined by the broader data science community and the data science institutions over time. We are going back to those principles and adapting them.
For example, I am currently reading a paper on neural causal abstractions from the Causal AI Lab at Columbia University by Professor Elias Bareinboim. I actually took a course from him. The work shows how to extract causal abstractions from high-dimensional data: pixels, individuals, fMRI images, or economic indicators like GDP. Foundational research has proved the theorems and provided the methodology. The next step is taking it forward to real business applications: how can I use this in a CPG company to understand customer segments at different levels? If I run an intervention, how will this segment or the overall customer base behave? That bridge from theory to messy real-world deployment is decision engineering.
Many people still follow a flawed approach: they learn a few things from bootcamps or YouTube videos and then stick to what they know, endless feature engineering on the same algorithms. They never experiment with new methods or develop new systems. In contrast, I have seen what motivated graduate students in MS data science programs are actually doing. One classmate simulated how Waymo’s autonomous traffic would face problems in New York and solved it with reinforcement learning. Another developed new memory mechanisms for question answering from videos. Capstone projects have explored market-mix modeling combined with segmentation and even used LLMs as judges to validate search-and-recommendation systems. These students are learning to identify important problems, seek inspiration from foundational research, and develop new methodologies.
As Richard Hamming said, we should focus on important problems, not just important algorithms. Whether it is recommendation systems that truly help people discover better things instead of brain-rotting content, or demand forecasting that also captures short-term shocks (today's competitor ad or offer can disturb next week's market), the thought process is what makes the difference.
Data science gives us the freedom to specialize while staying holistic. Some may approach everything through a probabilistic lens, others through causal machine learning, and others through neural networks. The key is letting the problem pull in the right tools: conformal prediction for uncertainty, dynamic systems for weather domains, behavioral economics for persuasion and policy. One of my classmates is combining behavioral economics (Kahneman, Thaler, and the biases, judgments, and noise in human decisions) with agents, infrastructure, statistics, and machine learning. That intersection is where the real distinction from pure software engineering or computer science lies.
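Of the tools above, conformal prediction is the quickest to show: given held-out residuals from any point predictor, a single quantile yields prediction intervals with guaranteed coverage. A minimal split-conformal sketch (the residuals here are toy numbers, and real use would compute them on a proper calibration set):

```python
import math

def conformal_half_width(calibration_residuals, alpha=0.1):
    """Split conformal prediction: return the half-width w such that
    [y_hat - w, y_hat + w] covers new points with probability >= 1 - alpha."""
    scores = sorted(abs(r) for r in calibration_residuals)
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))  # conformal quantile rank
    return scores[min(k, n) - 1]

# Toy held-out residuals from any point predictor:
residuals = [-1.2, 0.4, 2.0, -0.7, 1.1, -2.5, 0.2, 1.8, -0.9, 0.5]
w = conformal_half_width(residuals, alpha=0.2)
# Interval for a new prediction y_hat: [y_hat - w, y_hat + w]
```

Notice that the guarantee comes from the rank statistics alone; it holds regardless of which model produced the residuals, which is exactly why it travels well across problems.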
This is my signal to students, to organizations, and to the entire community. When you look for data science talent or when you step into learning the field, think beyond models and dashboards. Look for people who can discover new problems, wear multiple hats, bridge research and practice, and engineer decisions that make the whole system better: customers, incentives, second-order effects, and long-term impact.
Data science is not software engineering. It is not consulting, and it is not dashboarding. It is the unique discipline that sits at the intersection and has enormous untapped potential. That is the prior I refuse to lose, no matter how many narrow, potential-limiting roles or clichéd job descriptions I see.