In some ways, the analysis workflow is undergoing more radical change than it did with the arrival of calculators or spreadsheets. What follows is one such change for this author, and it is not remotely the most important of the use cases in investments.

Whenever I have a query that I would earlier have directed to a research professional, I now pose it to multiple models. Opening several AI-assisted chat boxes each morning is akin to consulting a team of highly intelligent analysts, each offering a different perspective on the issue at hand. I often find myself copying and pasting the same question into five or more of ChatGPT, Gemini, Copilot, Bing, Hugging Face, Anthropic's Claude, Perplexity, and Groq. They provide varied, though not always consistent, insights, and I use that variety to have them verify each other's answers and to draw out critiques and counterpoints.
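
As a rough illustration of that fan-out, here is a minimal Python sketch that sends one prompt to two of the providers above via their official SDKs (the openai and anthropic packages). The model names and the sample question are illustrative assumptions, not a record of my actual queries, and API keys are expected in the usual environment variables:

```python
# A minimal sketch: fan the same question out to two models and compare answers.
# Assumes the openai and anthropic Python packages are installed and that
# OPENAI_API_KEY / ANTHROPIC_API_KEY are set in the environment.
from openai import OpenAI
from anthropic import Anthropic

prompt = "How did AI/ML spending shift toward LLMs between 2021 and 2024?"  # illustrative question

# Ask ChatGPT (model name is an assumption; use whichever is current)
openai_client = OpenAI()
gpt_reply = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print("ChatGPT:", gpt_reply.choices[0].message.content)

# Ask Claude the same question (model alias is also an assumption)
anthropic_client = Anthropic()
claude_reply = anthropic_client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
print("Claude:", claude_reply.content[0].text)
```

In practice I do this by hand in the chat windows rather than through code, but the principle is the same: one question, several independent answers, and the disagreements are often where the insight lies.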

Take the following example of a query I posed on a topic where little information is available, and which in earlier times might have taken even the smartest associates days to answer even approximately. My inquiry was into the AI/ML industry's dynamics at the close of 2021, particularly to understand how the entire field pivoted to LLMs after the validations of 2022. The answers, never fully consistent, suggested that of an estimated $300-400 billion market, roughly 40% of effort before 2021 went into non-neural-network methods of machine learning (e.g. SVMs, decision trees, regressions). Within neural networks, less than 20% of expenditure was on LLMs and related transformer models. Fast forward to 2024, and all the chatboxes agreed on a material shift in the pie, with many ascribing 70-90% of new projects to these foundation technologies, relegating older methods to legacy maintenance. And this while the overall spending pie grew by more than 60% over those years.

In some ways, one only needs to look at NVIDIA's results to understand this! The chatbox-induced learning can take unimaginable forms through summarizations and negative queries, and we will go into those in future posts. Let's end with something different here.

As witnessed nearly every week for over a year, transformer models keep on giving. The same basic method, deployed with different adjustments and data, yields results that continue to stagger its developers and its biggest enthusiasts. I asked which single equation could be taken as representative of these models, and got the following:

Attention(Q, K, V) = softmax((QK^T) / sqrt(d_k)) * V

The equation is not as memorable as E=mc², so we will spare you a full description. But this is the equation that has enabled an almost unimaginable mimicking of human cognitive processes, and it keeps leapfrogging ahead.
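
For readers who prefer code to notation, here is a minimal single-head sketch of that scaled dot-product attention in Python with NumPy. The toy shapes and variable names are purely illustrative and not taken from any production model:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted average of the values

# Toy example: 4 tokens, 8-dimensional queries/keys/values (shapes are arbitrary)
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 8)
```

Everything else in these models, from the stacked layers to the multiple heads, is scaffolding around this one weighted-averaging step.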
