Investing in the Maelstrom of Innovation: A Guide for the Cautious Yet Ambitious
Nilesh Jasani · January 24, 2024

In the tumultuous realm of innovation, particularly in fields linked to or augmented by generative AI, traditional investment wisdom, hinged on forecasting and long-term bets, falters dramatically. The attendant high and rising unpredictability demands a more nuanced approach to earn good risk-adjusted returns. A sensible, well-designed approach is all the more necessary given the phases of excessive valuations and subsequent busts likely in both public and private markets in the years ahead.

For example, in the pre-generative-AI era of drug discovery, less than 5% of drugs that entered preclinical trials reached the market, underscoring the high risk even for the experts developing them. In this context, it is nearly impossible for external investors to predict successful drug candidates at a higher rate than industry experts without getting lucky. This uncertainty necessitates a portfolio approach for risk-aware investors with passive exposure to the field, built on the assumption that the success of a few will outweigh the many failures. However, with the advent of generative AI, the landscape is shifting even more unpredictably. The flood of AI-generated ideas could lower success rates further, at least in the short term, as the industry adjusts to the new paradigm. The positives of shorter time to market and more idea generation are substantial. Still, generalist investors are better served approaching the sector through upstream volume beneficiaries until actual, meaningful successes like GLP-1 materialize.
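As a rough illustration of this portfolio logic, the short simulation below compares a single concentrated bet with an equal-weight basket of candidates. The ~5% success rate mirrors the figure above; the payoff multiple, basket size, and all other parameters are purely hypothetical assumptions for the sketch, not figures from the article. With the same expected value, the basket loses money far less often than the single bet.

```python
# Minimal sketch: why a broad basket of low-probability bets can be acceptable
# while any single bet usually fails. SUCCESS_RATE echoes the ~5% preclinical
# figure cited above; PAYOFF_MULTIPLE and N_CANDIDATES are illustrative guesses.
import random

random.seed(42)

SUCCESS_RATE = 0.05     # rough share of preclinical candidates reaching market
PAYOFF_MULTIPLE = 30    # assumed return multiple on a winner (hypothetical)
N_CANDIDATES = 50       # size of the passive basket (hypothetical)
N_TRIALS = 100_000      # Monte Carlo runs

def portfolio_return(n_candidates: int) -> float:
    """Equal-weight basket: each stake is lost unless the candidate succeeds."""
    stake = 1.0 / n_candidates
    total = 0.0
    for _ in range(n_candidates):
        if random.random() < SUCCESS_RATE:
            total += stake * PAYOFF_MULTIPLE
    return total - 1.0  # net return on the whole basket

single = [portfolio_return(1) for _ in range(N_TRIALS)]
basket = [portfolio_return(N_CANDIDATES) for _ in range(N_TRIALS)]

print(f"Single bet     : mean {sum(single)/N_TRIALS:+.2f}, "
      f"loss frequency {sum(r < 0 for r in single)/N_TRIALS:.0%}")
print(f"Basket of {N_CANDIDATES}  : mean {sum(basket)/N_TRIALS:+.2f}, "
      f"loss frequency {sum(r < 0 for r in basket)/N_TRIALS:.0%}")
```

Under these toy assumptions both strategies have the same expected return, but the single bet ends in a loss roughly 95% of the time, while the 50-name basket does so in only about a quarter of the runs. The point is the shape of the outcomes, not the specific numbers.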

The same principle applies across innovation sectors, from robotics to semiconductors and large language models (LLMs). The rapid pace of change makes it challenging to predict long-term market leaders. This uncertainty requires investment strategies that are nimble, evidence-based, and always ready with an exit plan. It is worth remembering domain experts' repeated admission that they do not fully understand what makes the models work. Making too many assumptions about the durability of a specific chat model or GPUs' persistently high market share is risky.

The other important conclusion is to keep the “exit door” open: retain the flexibility to sell if the investment hypothesis suddenly changes. Even picks-and-shovels plays are not secure. For instance, tremendous tension is building over how much LLM inference will occur at the core, in the cloud, versus at the edge, in consumer pockets or private networks, in the years ahead. Valuations of the current semiconductor and even hardware players almost uniformly assume no material shift away from cloud computing, which may or may not remain valid. The beneficiaries at the edge are different from those at the core.

Innovation should be one of the most positive macro drivers in the years ahead. It will materially widen the gap between its leaders and those who fail; we witnessed the first trailer of this over the last few quarters. The critical thing to recognize, however, is that the next generation of winners will have to watch their backs constantly, as new challengers keep emerging from every corner.
