Just the other day, we were expecting a "ChatGPT moment" for robotics and a few other fields. The fear, or hope, depending on where one stands, has shifted. The "DeepSeek Moment" is framed much like the "Sputnik Moment": a defining shock in global innovation, notable because it emerged outside the United States. But for us, the real surprise wasn't geography, and only partly the extent of the improvement. It was that DeepSeek didn't come from a conventional tech powerhouse. A finance company had just built one of the most capable AI models in the world. Unexpected? Yes. But in some ways, perhaps not.
In trying to understand what other DeepSeek moments are waiting to happen in unexpected corners, it is worth examining the factors that led to such an unexpected change in LLM-making and why this could happen again in some fields but not all. The story is partly about the possibility of innovation from outside the United States and the need to retain a global perspective. The deeper question is: how many fields, domains, and industries are currently waiting for their own DeepSeek moment, one that suddenly accelerates, democratizes, and invigorates significant innovation?
The Power of the Periphery: Where We Thought We Had a Chance
The DeepSeek moment wasn’t about a tech giant or a well-funded startup. It was about a group of researchers working on the fringes who saw what others didn’t. They weren’t beholden to the same constraints, the same orthodoxies, or the same investors. This is the "where" that matters: the places where the rules are different, where the constraints breed creativity, and where the outsiders become the pioneers.
When we started, about eighteen months ago, we believed we could create a substantially different technology for our purposes at a small cost, something like what DeepSeek has now achieved so spectacularly. We were not interested in merely building on top of other open-source models but in working from the ground up. Unfortunately, we could not find enough fast-moving capital around us to begin that attempt at foundational building. Still, it is worth revisiting what fuelled our ambitions, over and above our confidence in bringing together the right talent and providing it with technically and operationally competent leadership and a clear purpose.
We weren’t programmers, and we weren’t an AI lab. But two things supported our belief that we could do something substantial where the giant model makers appeared uncatchable.
The first was the fact that transformer knowledge was no longer locked inside elite research labs. In the past, machine learning breakthroughs required massive institutional backing, but by 2023, the open-source movement had democratized access to some of the most powerful tools. Midjourney, Mistral, Grok, and Falcon all proved that small teams with tight budgets could build cutting-edge AI models. In our earliest articles, we wrote extensively about how quickly, and with what small teams, cutting-edge models were being developed, contesting the view that LLM-making was forbiddingly difficult.
It was not just the evidence of the outcomes but also an analysis of the development process. A few years earlier, neural networks required deep dives into code for even basic operations; with so little information in the public domain, everyone had to reinvent the wheel whenever a new idea surfaced in a publication or at a conference. Today, with transformers, the core mechanisms can be encapsulated in just a few lines of code, thanks to high-level libraries. The attention mechanism, the heart of the transformer, can be implemented in a form that even someone with basic programming knowledge can grasp, and open-source frameworks like PyTorch and Hugging Face make reinventing the wheel unnecessary.
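As a minimal sketch of that point, here is scaled dot-product attention written in plain PyTorch. It deliberately omits the linear projections, multiple heads, and masking of a production transformer, but it is the core computation:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (batch, seq_len, d_model) query, key, and value tensors."""
    d_k = q.size(-1)
    # How relevant is each token to every other token, scaled for stability
    scores = q @ k.transpose(-2, -1) / d_k**0.5
    # Turn scores into weights that sum to 1 for each query token
    weights = F.softmax(scores, dim=-1)
    # Each output vector is a weighted mix of the value vectors
    return weights @ v

# Toy self-attention over one sequence of 4 tokens with 8-dim embeddings
x = torch.randn(1, 4, 8)
print(scaled_dot_product_attention(x, x, x).shape)  # torch.Size([1, 4, 8])
```

Anyone comfortable with matrix multiplication can follow it, which is precisely why the moat around the core mechanism turned out to be so shallow.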
It Is Not a DeepSeek Moment if You Expect It
The second was a fundamental shift in how AI models were built. The early AI revolution was all about scale—Google, OpenAI, and Meta poured billions into training massive models, assuming that bigger was always better. The industry’s obsession with scale left room for leaner, more focused innovation.
As DeepSeek has proven, it was improbable, if not impossible, that the early transformer models had perfected their architecture in their first iterations. The process was never a clean, linear refinement but an evolving patchwork of inspired tweaks and recalibrations. Even now, with DeepSeek's breakthroughs, this journey is far from over; we are still in the early days. DeepSeek's success in shattering myths about prohibitive costs and insurmountable effort has emboldened developers worldwide to chase efficiency rather than sheer scale. The next wave of improvements is inevitable, and it will likely come from those with the strongest incentives to optimize, not just expand.
We should expect a stream of innovations in model development from all corners of the world in the coming years, dotted with a small number of substantial breakthroughs. But none is likely to create the stir DeepSeek caused, because the baseline of expectations has shifted. Only weeks earlier, the headlines were about model development grinding to a halt as the scaling laws ran into diminishing returns, which made the DeepSeek announcement all the more shocking. Now, expectations sit at the other end.
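For readers who have not met those scaling laws, the commonly cited empirical form (a Chinchilla-style fit, shown here purely for illustration) makes the diminishing returns explicit:

```latex
% Empirical scaling-law fit (Chinchilla-style), illustrative only:
% L = expected loss, N = parameter count, D = training tokens,
% E = irreducible loss; A, B, \alpha, \beta are fitted constants.
L(N, D) \approx E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}}
```

Because the fitted exponents are well below one, each further drop in loss demands disproportionately more parameters and data; that is the wall those headlines pointed to, and the cost curve DeepSeek attacked through efficiency rather than scale.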
Where Serendipity Wins and Where It Fails
Many other fields, where expectations remain low, are ripe for surprises. Industry outsiders with clever ideas could disrupt a few of them, but most demand deep execution, infrastructure, and regulatory mastery on top of inventive ideas before innovators can monetize. Even setting monetization aside, it is worth analyzing in which fields deep domain knowledge remains critical for innovation and in which outsiders may have a chance.
We split the innovations into two simple categories to frame the discussions in the sections below.

Outsiders Have a Chance in Domains Where Math Rules
The development of transformer models has shown that groundbreaking advances can arise from unconventional sources, though the phenomenon manifests differently from field to field. Consider weather forecasting, where Google's AI models have outstripped traditional expert-built, physics-based forecasting systems. This achievement is particularly striking given that Google's team likely possessed limited meteorological knowledge. Their success stemmed instead from a deep command of the mathematics of modern deep-learning models, demonstrating that expertise in the specific domain is not always a prerequisite for innovation.
Language models provide another compelling example. The latest Chinese LLMs, likely trained without a single Gujarati-speaking programmer, can now explain Kant's philosophy (a fiendishly difficult task) in fluent Gujarati (this author's mother tongue). This demonstrates that AI can capture and generate complex knowledge in a language even when no human expert in that language has contributed directly to its development. The same principle extends to other specialized domains: legal contracts, healthcare documentation, and even protein folding.
In other words, it is quite possible that a sudden DeepSeek moment will arrive against something as celebrated as Google's AlphaFold, which has already revolutionized structural biology. In such sciences, material science among them, there is ample room for further breakthroughs from teams that lack deep domain backgrounds but excel in algorithmic innovation (an argument that is admittedly simplistic in some domains). The lesson is clear: in fields driven by cognitive and computational pattern recognition, domain expertise is becoming less of a gatekeeper, opening the door to unexpected innovators.
Robots and Drugs: Where AI Hits Reality
The same pattern of unexpected innovation is unfolding in robotics and autonomous mobility. Across the world, teams with no prior industry background are developing AI models that significantly enhance perception, decision-making, and motion planning. In robotics, leading universities in the West are partnering with corporations to push the boundaries of how machines learn to move and process visual data. We recently read about researchers infusing intelligence, that is, algorithmic processing abilities, into a lamp to support expressive lamp-human interaction!
The same is happening in mobility, where AI-powered driverless car systems continue to improve, not just through traditional automotive giants but also via software-focused teams with no history in vehicle manufacturing. Yet, despite these cognitive advancements, the leap from theoretical success to real-world execution remains immense in these fields, as opposed to, for instance, weather forecasting.
It is one thing to build a model that enables a robot to recognize and grasp objects. It is quite another to manufacture that robot at scale, integrate it into supply chains, ensure reliability, and cut costs enough to make it commercially viable. The difficulty compounds further when we move beyond hardware. In industries like consumer electronics and robotics, success isn't just about getting the technology right; it's about branding, distribution, servicing, and long-term adoption. The lesson from the last decade of failed robotics startups is clear: cognitive capability alone does not translate into market dominance.
Nowhere is this contrast sharper than in drug discovery. AI models can now propose molecular structures in minutes, but taking a drug from concept to market requires navigating clinical trials, safety approvals, healthcare regulations, and distribution channels. This is where the real bottleneck lies. Unlike AI language models, where deployment is instant, biotech breakthroughs require years of rigorous real-world validation. The difference between pure cognitive innovation and execution-heavy industries is stark—some fields allow for rapid disruption by unconventional players, while others demand deep industry expertise and long-term operational mastery.
The Next DeepSeek Moment
DeepSeek has made it official: the boundaries of innovation have shifted. The world over, teams feel encouraged to attempt something bold with AI algorithms to create something new. As a result, innovation announcements are likely to come thick and fast, which will create problems for anyone in these fields who must consistently stay at the top of the pack to monetize.
Even last year's disruptors could suddenly find themselves disrupted where efficiency, rather than sheer scale, becomes the defining competitive edge. Robotics, biotech, and mobility are ripe for breakthroughs, but only in some parts of those fields. The cognitive layers may see fast-moving upstarts succeed, but the execution-heavy layers will still demand expertise, supply chains, and regulatory experience. In conclusion, if what sped up innovation earlier was the combined weight of humanity's best brains, capital, and machine capabilities, there is now new hope and energy as well.