The launch of DeepSeek’s R1 model sent shockwaves through the AI ecosystem, momentarily jolting global equity markets. As the initial shock has subsided and a deeper understanding has taken hold, the development has reshaped our outlook on AI and the companies likely to benefit from it.
From an investment perspective, the most crucial takeaway is that key AI developments are ongoing, with rapid innovation and enhanced efficiency likely accelerating AI adoption. Put simply, the more capable and cost-effective AI becomes, the more people will use it. In turn, more widespread adoption will bolster the ecosystem overall. However, as we’ve pointed out since the launch of ChatGPT, these benefits will not be distributed evenly.
For investors, this development underscores two key tenets of thematic investing:
- Early winners don’t always stay on top.
- Allocating to a theme via informed active management—grounded in a thorough assessment of the value chain and those best positioned to create and capture value—can deliver better results than investing in a naïve basket of theme-related stocks.
A Real Evolution
While perhaps not as revolutionary as initial reports suggested, DeepSeek’s latest model marks a significant milestone in the advancement of AI models, with far-reaching implications for the industry going forward.
Their approach pairs a “mixture of experts” architecture, a technique previously known in the AI development space, with distillation, in which a “teacher” model schools smaller “student” models on a subset of the overall training data. The combination demonstrated remarkable efficiency gains: the model substantially improved its outputs while using just a fraction of the dataset and computing resources.
That’s a big deal.
To put it in perspective, this part of the process saw efficiency gains of around 90%. Imagine if your car’s fuel efficiency jumped from 30 miles per gallon to 300. That’s how big of a change we’re witnessing.
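For readers who want a more concrete picture of the teacher-and-student process described above, the sketch below shows the basic mechanics of knowledge distillation in PyTorch. It is purely illustrative: the model sizes, layers and temperature setting are hypothetical stand-ins, not DeepSeek’s actual architecture or training pipeline.

```python
# Minimal, illustrative sketch of teacher-student distillation (not DeepSeek's code).
# A large, frozen "teacher" model produces soft probability targets, and a smaller
# "student" model is trained to match them on a subset of the data.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for a large pretrained teacher and a smaller student.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution

def distillation_step(x):
    with torch.no_grad():            # the teacher is frozen; only the student learns
        teacher_logits = teacher(x)
    student_logits = student(x)
    # The student learns to match the teacher's softened predictions (KL divergence).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example: one training step on a small batch of random features standing in for real data.
print(distillation_step(torch.randn(32, 128)))
```

In a real pipeline the inputs would be curated training examples and both models would be large language models, but the core idea is the same: a big model’s outputs guide a smaller model’s training, so the smaller model needs far less data and compute to reach strong performance.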
One caveat? We suspect—and are still verifying—that the model is more limited in its applications compared to other models. By taking shortcuts to excel at solving certain types of problems and not others, it likely won’t perform as well on tasks outside its training scope. In other words, imagine teaching someone to be exceptional at history, but then giving them a chemistry exam. They probably wouldn’t do as well.
In addition, DeepSeek revealed that the key part of their modeling used computing resources costing around $5–$6 million (Display). Given the nature of the model, that makes sense, in our view. However, this figure excludes the cost of creating and validating the extensive training dataset on which the model was built.
Importantly, it suggests that an upstart AI company cannot build a model that goes from zero to 100 mph without either leveraging existing resources—potentially violating other companies’ intellectual property (IP)—or investing massive amounts of capital to develop and validate their training data. Indeed, OpenAI and Microsoft have already accused DeepSeek of violating their terms by building a competitor leveraging OpenAI’s own IP. Time will tell if these allegations hold water and what, if any, recourse might be pursued.
Lessons for the Industry
What has the DeepSeek development taught us that will influence how our portfolio managers invest in AI?
First, we’d already questioned the strength of the competitive moats for companies building AI models. This latest news suggests those moats may not be that strong after all.
Second, regarding the computer chips used to train and run AI models, there may be a larger market for lower- or mid-tier chips than previously thought. For instance, the industry’s most advanced GPUs (the most general-purpose and powerful AI chips), such as those made by Nvidia, have been in high demand for the intensive computing needed to train next-generation AI models. However, due to the efficiency of DeepSeek’s methodology, it’s possible that forthcoming generations of models could be trained with application-specific integrated circuits (ASICs), which are cheaper but less flexible chips designed for narrower workloads. While predicting the future sales trajectories of leading semiconductor companies remains incredibly challenging, there’s a growing possibility of a demand “air pocket” for high-end chips in the near to medium term.
Third, for several quarters, the markets have had their eyes acutely trained on the capital expenditure plans of the Magnificent Seven, which have been spending tens of billions of dollars each on AI. Does more efficient AI training mean they might be able to reduce that spending going forward? Or will they still spend the same amount but aim to get even more bang for their buck? How much of what they’ve spent so far or are planning to spend will end up being wasted?
The Good, the Bad, and the Ugly
What does this mean for various players in the industry?
Overall, the segment that has emerged at the top of the heap is the software makers. These companies will build on top of AI models to deliver more efficient services for their customers. The cheaper AI models and their associated computing become, the more affordably software makers can price their services, and the faster AI can gain traction in actual end-use cases.
Chipmakers also came into focus. While we recognize the risk of a shift in demand away from the high-end chips, we also note that the demand for computing power has consistently grown over the decades, not just because of AI in recent years. We expect this long-term trend in global computing usage and chip requirements to continue.
Doubts emerged early as to what this would mean for data center operators, since new models will be more efficient and require fewer computing resources. However, to the extent this is offset or overwhelmed by an increase in AI workloads, data center operators will still benefit.
Some of the worst-hit stocks following the DeepSeek news were utility companies and those involved in building the electrical infrastructure needed for the future. AI and other data centers are expected to be key drivers of power demand in the coming decades. Yet even before DeepSeek R1’s launch, it was widely acknowledged that dramatic efficiency gains in the AI space would be necessary and were expected in the coming years. This is just one example of such a development; we’re counting on more over time. Considering this and the other drivers of future electricity demand, we retain a positive outlook on long-term demand growth for the utility sector.
More AI Milestones
While the markets reacted with trepidation to DeepSeek’s news—particularly given that it was a Chinese competitor potentially capable of going toe-to-toe with US and European AI players—we’re happy to see more innovation and development in AI technologies.
Further improvements are needed—the better and cheaper we can make these technologies, and the more economically viable their applications become, the faster the industry can grow. This, in turn, can have a significant impact on productivity growth and improving society’s standard of living.
We’re watching for the next big development, positioning ourselves in the stocks that we think have the most potential to harness AI’s value. What’s more, we’re looking forward to additional market turbulence that may present opportunities to reweight our portfolios to take advantage of other investors’ fears.
- Christopher Brigham
- Senior Research Analyst—Investment Strategy Group