Many firms under our coverage with an artificial intelligence theme were trading in 1- or 2-star territory. Our valuations were already positioned for a pullback like this, as we had difficulty justifying the revenue increases implied by these valuations. We view the current pullback as healthy, even as we remain positive about the long-term potential of AI. We have maintained our fair value estimates across the affected companies.
After the release of DeepSeek’s R1, our thesis is that we’ll see sleeker, more efficient AI models that don’t rely on massive clusters of AI GPUs and related hardware. This is the only way the ecosystem can address large numbers of use cases in the long term.
We believe that lower costs make AI viable for more use cases and, as a result, should increase demand. This is the same path the PC revolution followed: computing power became cheap enough that millions of individuals could afford the technology. The same happened with the cloud and SaaS revolution, in which the incremental cost of adding users was close to zero. We believe a future in which AI is both prohibitively expensive and "taking over the world" is unlikely. As such, we view the advancements made by DeepSeek as promising and healthy for the overall ecosystem.
Winners and Losers of the New AI Landscape
We view companies solely reliant on monetizing large language models as the biggest losers (these tend to be private companies), since creating a moat in this space is difficult. In the long term, we see LLMs as commodities whose key advancements are easily replicable, and the big winners will be the cloud infrastructure providers, along with those who can provide integration and performance benefits at the application layer.
The companies selling hardware to the cloud providers, the equipment makers, and those exposed to AI-driven energy consumption sit somewhere in the middle. We still see healthy demand in this space for years to come, but we acknowledge that uncertainty and risk have increased for this part of the value chain. No one knows exactly where the trade-off between efficiency gains and increased demand will settle.
We believe a bear case must contemplate dramatic cost declines with minimal performance improvements. Hyperscalers would have to slash future capital expenditures as they choose to build AI with lower capital intensity. We do not think this is likely. Instead, we anticipate that US and European model builders will still rely on expensive but high-performing "AI factories" (as termed by Jensen Huang, CEO of Nvidia) for future model iterations. These companies will have the advantage of deploying more best-of-breed AI accelerators and infrastructure to make advancements.
The author or authors do not own shares in any securities mentioned in this article.