FunSearch - the AI black swan
Nilesh Jasani
·
December 18, 2023

Nassim Nicholas Taleb, the master of unpredictable probabilities, once said: "It takes one observation of a black swan to disprove all the generalizing statements that every swan is white." One such observation should force a complete restatement of one's prior probabilities.

If LLMs can do even one thing that humans have not managed in any theoretical realm, it heralds much more to come. This is not just the usual "there is never just one cockroach" theory. As this author has said before, these models are right now the dumbest they will ever be. Google's FunSearch announcement is a giant step, though even it will be forgotten in no time, given the pace of meaningful announcements these days.

The simple lesson, once again: LLMs are not just about copilots and chatbots. They are going to turbocharge innovation across fields.

But wait, aren't LLMs riddled with errors and hallucinations? Of course they are! We have to remember that the journey of scientific discovery has always been marked by trials, errors, and gradual refinements. William Shanks' laborious calculation of Pi, later discovered to contain an error at its 528th digit, stands as a testament not just to humans making errors but to those errors remaining undiscovered for decades, even centuries. As Google's announcement gushes:

"…first time a new discovery has been made for challenging open problems in science or mathematics using LLMs…In addition, to demonstrate the practical usefulness of FunSearch, we used it to discover more effective algorithms for the "bin-packing" problem, which has ubiquitous applications such as making data centers more efficient."

The FunSearch paper's new constructions of large cap sets, which go beyond the best previously known ones, are for the theoreticians; it is not for nothing that Terence Tao had previously highlighted this as his favorite open problem. The LLM's success is not merely a technological triumph but a philosophical revelation. It illustrates that LLMs can operate on a complexity plane that often surpasses human capacity, opening avenues for tackling theoretical and practical challenges with unprecedented efficiency and innovation.
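At its core, FunSearch pairs a code-writing LLM with an automated evaluator: the model proposes modified programs, the evaluator scores them, and only improvements survive. The sketch below captures that evolve-and-evaluate loop in miniature. It is a simplifying assumption throughout: the real system queries an LLM for program mutations and scores full Python functions on actual problem instances, whereas here the "LLM" is a stand-in that perturbs a single numeric parameter.

```python
import random

def evaluate(candidate) -> float:
    """Toy fitness: how close the candidate's output is to a target value.
    (FunSearch instead runs a generated program on real problem instances.)"""
    target = 42.0
    return -abs(candidate(10) - target)

def make_candidate(coeff):
    """Candidate 'program'. In FunSearch these are Python functions written
    by the LLM; here they are parameterized closures for illustration."""
    return lambda x: coeff * x

def mutate(coeff, rng):
    """Stand-in for the LLM proposal step: perturb the current best."""
    return coeff + rng.uniform(-1.0, 1.0)

def funsearch_sketch(generations=200, seed=0):
    """Minimal evolve-and-evaluate loop: propose, score, keep improvements."""
    rng = random.Random(seed)
    best_coeff = 1.0
    best_score = evaluate(make_candidate(best_coeff))
    for _ in range(generations):
        child = mutate(best_coeff, rng)
        score = evaluate(make_candidate(child))
        if score > best_score:  # greedy selection; FunSearch uses islands
            best_coeff, best_score = child, score
    return best_coeff, best_score
```

The key design point survives even in this caricature: because candidates are programs scored by an objective evaluator, hallucinated or broken proposals are simply discarded rather than accumulating as errors.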

Take, for example, the bin packing problem, a classic conundrum in optimization that has been the subject of extensive human research for decades. The way FunSearch evolved better heuristics for it is no less remarkable, and this time the result is easier to grasp, with practical implications for fields like efficient data center design.
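For context, the baselines that FunSearch's evolved heuristics were measured against are simple online rules such as first-fit. A minimal version is sketched below; note this is the textbook heuristic, not the one FunSearch discovered.

```python
def first_fit(items, capacity):
    """Classic first-fit heuristic for bin packing: place each item into
    the first open bin with enough room, opening a new bin if none fits.
    Item sizes are integers to avoid floating-point comparison issues."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        for i, remaining in enumerate(bins):
            if item <= remaining:
                bins[i] -= item
                break
        else:  # no open bin could hold the item
            bins.append(capacity - item)
    return len(bins)

# Example: nine items packed into bins of capacity 10
print(first_fit([5, 7, 5, 2, 4, 2, 5, 1, 6], 10))  # → 5 bins
```

FunSearch's contribution was to evolve the scoring rule that decides which bin an item goes into, discovering priority functions that beat such hand-written baselines on the benchmark instances.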

This and Tesla's Optimus Gen-2 announcement (https://bit.ly/48cOesG) are a clarion call to reevaluate our perception of LLMs. The machines' hallucinations and errors are precisely what define the role of humans in the equation. Human intervention becomes more crucial than ever in guiding these models, filtering out inaccuracies, and harnessing their capabilities towards constructive ends.

One must end with a question about Google: it continues to lead in expanding the frontiers of AI - from proteins to robotics, in music, images, and even cooking. It may need its AIs to help it announce revolutionary products now. Or perhaps, like IBM, it knows the game is not about collecting monthly subscriptions for things that are getting commoditized quickly. Either way, 2024 is going to be another exciting year.
