Some Answers to AI's $600 Billion Question
Nilesh Jasani
·
July 7, 2024

In a bull market, pessimistic articles rarely get attention, so when one does, like Sequoia's "AI's $600 Billion Question" (https://shrtm.nu/eLVCSY5), one must pause. The article raises essential questions about the substantial upstream AI spending that has yet to convert into the anticipated downstream revenue. While the math may be debatable, the concern is legitimate: where are the returns?

We previously addressed part of this value-add in "Wither Keynesianism" (https://bit.ly/3VZxf9X), suggesting that AI capex may, at the least, have prevented a global recession. However, saving the world isn't the job of GPU spenders or LLM makers. If their capex proves wasteful, it could cause serious problems for everyone involved, not just them.

There's a chance this is all a bubble, but if not, the revenue answers won't be found at the software or application level. Understanding this is critical to identifying the markers on which to form an opinion.

For months, we have discussed how the XaaS pricing model may have reached the limits of its applicability with GenAI. Subscriptions to LLMs are yielding little revenue, and this won't change. That is worrisome for LLM makers, but it is also another indicator that the software industry's structural pressures are real. No matter how many tech leaders attribute their layoffs to earlier over-hiring, something structural is at work. We have called it the “supremacy flip” between software and hardware.

Assuming generative AI's use cases are genuine for corporates and consumers, the revenue evidence will rest with those selling transformative hardware. This shift might require compute to move from the cloud to consumers' pockets, putting pressure on certain types of data center and chip investments. Collectively, returns will favor hardware companies like Qualcomm and MediaTek in a different kind of upstream, but even more the gadget OEMs. The future revenue could come from newer companies and newer gadgets, including robotics.

More importantly, as we have written dozens of times, LLM applications extend beyond language and vision into biotech, health tech, and scientific fields. They're impacting driverless cars, materials science, and more. Revenue is already picking up in these segments, but it's hard to trace back to LLM developers. Whatever the resistance, including in the article above, the capex has turned long-duration, and many downstream manifestations will arrive with a lag.

If we are wrong, and LLMs are nothing but language models for guessing the next word (our website geninnov.ai offers hundreds of pieces of evidence that they are not), the $600 billion question involves more than a cyclical business correction for the current winners. The real question, however, is whether LLMs herald a revolution beyond chatbots; if one agrees that they do, the answers to where the bubbles might lie are different from where people are looking.
