When shrinkflation hits the chatboxes, turns into shirkflation, and leaves answers a-hanging
Shrinkflation: The Auto-Rickshaw Chronicles and Idli Intrigue
Shrinkflation. It's a word that would've sounded like some sci-fi phenomenon back in my college days in the 1990s. And yet, my first encounter with its essence was in real life. Our college town had this quirky shared auto-rickshaw system, almost a budget limousine ferrying a crowd of students for a pittance. As Gulf War-era gas prices surged, our kind-hearted auto drivers didn't hike their rates. Instead, they creatively solved the problem by squeezing more of us into the rickshaw. First five. Then seven. And soon we were a veritable sardine can on wheels, clinging to each other and to the very concept of personal space.
It was the original ride-sharing economy, minus the app and the surge pricing.
And it wasn't just our rides: the idlis we picked up at nearby cafés got smaller every semester, a blessing for our waistlines that our scrawny frames did not need. Back then, there was no fancy term like "shrinkflation" for it.
Fast forward to today, and shrinkflation is as ubiquitous as selfies at a tourist spot. Open a bag of chips, and you might wonder if you've purchased a packet of air with a side of disappointment. "Family-sized" juice cartons now apparently cater to the minimalist family of two. It's everywhere, so it should not have been surprising to see it reach GenAI land. But as always with this field, the manifestations are worth talking about.
When AI Goes on a Diet: The Great Language Model Squeeze
Until a few months ago, our trusty digital companions could spin out coherent, insightful answers faster than I could say "algorithm." Those verbose days feel like a distant utopia. Now, they increasingly channel their inner Hemingway, minus the genius for concision.
A few weeks ago, ChatGPT would expound on a topic with the enthusiasm of a history professor on caffeine. Now, it starts strong, building up to a grand point, and then—BAM! It stops mid-sentence, leaving you hanging like a season finale cliffhanger. It's as if the AI took a vow of silence right when things were getting interesting, with no easy way to make it continue.
Then there's Perplexity, which has adopted minimalism with the zeal of a decluttering guru. It offers responses in rapid-fire bullet points that read more like a SparkNotes summary, or worse, a to-do list, than an insightful answer. And Google Gemini? It's like the polite friend who tries to let you down easily, offering a "concise summary" instead of the full story. At this rate, the day our LLMs communicate only in emojis and haikus may not be far off.
Effectively, where once you could ask about the history of philosophy and get a dissertation, you now get: "Plato said some stuff. Aristotle disagreed. Skip to Nietzsche. The end."
So, what's causing our loquacious AIs to become the strong, silent types? There are new, awkwardly imposed and undeclared limits on the number of tokens, not just on the input but, more importantly, on the output. Each token costs computational power, and when millions of users are asking everything from "What's the meaning of life?" to "Why does my cat ignore me?" those costs add up faster than a teenager's text messages. Anecdotally, a single GPT-4 query devours enough electricity to power a small LED bulb for an hour. Multiply that by millions of users, and you're looking at a power bill that would make a Bitcoin miner blush.
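For the technically curious, here is a minimal sketch of what such an output cap looks like from the developer's side, using the OpenAI Python SDK. The model name and the 150-token cap are my own illustrative assumptions, not limits any provider has disclosed; the point is simply that a hard cap, rather than the model's own judgment, is what leaves an answer dangling mid-sentence.

```python
# Minimal sketch: capping output tokens and detecting a mid-sentence cutoff.
# The model name and the cap value below are assumptions for illustration,
# not any vendor's actual server-side policy.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": "Give me the history of philosophy."}],
    max_tokens=150,       # hard cap on output tokens; the number is made up
)

choice = response.choices[0]
print(choice.message.content)

# finish_reason == "length" means the cap, not the model, ended the answer --
# the "stops mid-sentence" behavior described above.
if choice.finish_reason == "length":
    print("[answer truncated: output token limit reached]")
```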
To keep the virtual lights on without emptying their coffers, AI companies trim the fat and cut back on the "quality calories." This shrinkflation may be a stopgap, a way to balance the books without charging us outright. But I suspect we'll soon see pricing models based on how much information you really need. Just a simple answer? Free. Want a PhD-level dissertation? That'll be $5, please. Just as airlines charge for extra baggage, perhaps we'll be nickel-and-dimed for every token exceeding the free tier.
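To make the baggage-fee analogy concrete, here is some purely hypothetical back-of-the-envelope math. The free allowance and the per-token rate below are numbers I invented for illustration, not anyone's actual pricing.

```python
# Hypothetical "baggage fee" pricing for long answers; all numbers are made up.
FREE_OUTPUT_TOKENS = 500      # assumed free allowance per answer
OVERAGE_RATE_PER_1K = 0.03    # assumed dollars per 1,000 extra output tokens

def answer_surcharge(output_tokens: int) -> float:
    """Return the hypothetical extra charge for an answer of a given length."""
    overage = max(0, output_tokens - FREE_OUTPUT_TOKENS)
    return round(overage / 1000 * OVERAGE_RATE_PER_1K, 4)

print(answer_surcharge(200))   # 0.0   -> a quick answer stays free
print(answer_surcharge(5000))  # 0.135 -> the "PhD-level dissertation" costs extra
```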
Gratitude in the Age of Shrinkage
But here's where the Thanksgiving spirit kicks in. This shrinkflation, annoying as it is, reminds us of the old adage to "be grateful for what we have"—or, in this case, for what we had. In a world where shrinkflation shrinks even our answers, Thanksgiving serves as a reminder to savor the good parts, the moments when LLMs do give us that dazzlingly precise answer or inspiring insight. Chatboxes are the children of the post-free-social-media, or XaaS, era. Their pricing models will evolve further, over and above today's subscription costs, and we should be thankful we are not there yet this year.
As for me, I think it's time for a break—to recharge, reflect, and perhaps enjoy a cup of Kafkian...