Chip Matters Coz Chips Matter
Nilesh Jasani · September 23, 2024

There was a time when the heartbeat of technological advancement was synchronized between software and hardware. New semiconductor integrated circuits (ICs) didn't just enhance computing power; they unlocked entirely new realms of possibility. We are returning to a similar era, not just for data center chips but across all consumer devices.

In the late 1980s and early 1990s, Intel's 32-bit x86 processors weren't just faster chips; they were the linchpins that made operating systems like Microsoft Windows a practical reality for personal computers. To keep its Mac OS competitive, Apple partnered with IBM and Motorola on PowerPC processors. Fast forward to the early 2000s, and ARM processors emerged as the unsung heroes that made smartphones possible.

Those now feel like stories from a bygone era. In recent years, the tight coupling between new hardware and new software seemed to wane, at least in the consumer space. The latest phones, tablets, and PCs didn't offer applications that couldn't run on devices bought a few years earlier. Upgrades became a matter of preference rather than necessity, except perhaps for hardcore gamers or professionals requiring cutting-edge performance.

By now, we recognize that the trend has broken for data centers. Every Nvidia chip is celebrated in tech circles and beyond as a necessity for AI models to progress further. In a widely shared clip, Oracle’s Larry Ellison recounted a private dinner where he and Elon Musk “begged” Nvidia’s Jensen Huang: “Please take our money. By the way, I got dinner. No, no, take more of it. We need you to take more of our money, please.”

So far, consumer device chips have not been the subject of equivalent celebration or anticipation. But something is changing.

Apple’s iPhone 16 announcement seemed like a dud, largely because Apple Intelligence, its AI suite, was not released alongside the phones; it arrives next month. Amid the delay, the most significant aspect was quickly overlooked: Apple Intelligence will work only on the new phones and the highest-end models released last year. That is a big change. We haven't seen widely useful features restricted to only the latest phones in years.

This is unlikely to be an exception. The complexity of AI models continues to rise rapidly. OpenAI’s new o1 models, previously codenamed Strawberry, are decidedly superior, even if not perfect. Given their efficacy and architectural simplicity, their chain-of-thought approach is likely to be copied quickly; we expect every major LLM maker to develop its own version of a similar model within weeks.

A central feature of the new methods is the models’ compute-hungry internal rumination before they deliver some genuinely impressive answers. In many cases, the o1 models take minutes to arrive at an answer.

The point is that while models are becoming smaller in one sense, their complexity continues to rise in another. At the edge, upcoming chips such as MediaTek’s Dimensity 9400 and Qualcomm’s Snapdragon 8 Gen 4 are as important for the popularization of AI models as Nvidia’s GB200 or OpenAI’s newest models. These chips’ releases will not enjoy the fanfare that Nvidia’s do, but their makers’ revenues and profits should continue to surprise with each of these less-discussed yet critical developments.
