The third set of Cassandras sees no major technological leapfrogging through GenAI. Some obsessively focus on the errors, or on what the models still cannot do. No matter how many unforeseen innovations arrive every week across dozens of fields, these pessimists, including dogmatic philosophers, relish pronouncements like "machines can never develop consciousness/AGI." Such unfalsifiable claims about ill-defined terms are not the subject of this post; our aim is to understand the risks worth watching while learning from these critics.
Heuristic methods (that is, math or methods that work without a logical explanation) have raised hackles before. About 100 years ago, we had our first major brush with something that simply works: quantum physics. From the start it faced debate and skepticism, including statements like "God does not play dice," and the arguments continue even today. For most practitioners, though, it became time to "shut up and calculate" because of its practical applications.
Transformers are on a comparable journey. Everyone puzzles over whether machines can reason and how the models actually work. While the "explainability" debate rages, fueled by political anxieties, the enterprising are building a new industry around the illusion of explanation. The reality is that trillions of calculations cannot be explained in simple human terms; by definition, heuristic methods lie beyond what our language can capture.
Episodes like recent outages validate fears about our dependence on systems we cannot explain. Yet, as we will discuss with our fourth Cassandra, these weighty issues make for great dinner conversations and conference speeches among most practitioners but do not deter the "shut up and calculate" way of working.
The downside of heuristic methods is that they work until they don't: their validity rests on results, and their future course is utterly unpredictable. A genuine concern is our inability to spot deep hidden flaws, for example in a model-developed drug or product whose harmful effects surface only years later. These risks are critical to acknowledge and address, because left unchecked they can lead to grave consequences.
One approach to mitigating these risks is pitting models against each other for verification and risk assessment, leveraging their capabilities to uncover flaws humans might miss. Meanwhile, there will undoubtedly be attempts to halt these technologies altogether, leading to unpredictable policy paths and the potential stifling of innovation.
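The idea of pitting models against each other can be sketched in a few lines. This is a minimal, hypothetical illustration, not a production pattern: `generator_model` and `verifier_model` are stand-ins for calls to two independent AI systems, and the lookup tables inside them merely simulate model behavior for the sake of a runnable example.

```python
# Sketch of cross-model verification: one model proposes an answer, a second
# independently checks it, and any disagreement is flagged for human review.
# Both "models" below are hypothetical stand-ins for real LLM API calls.

def generator_model(question: str) -> str:
    # Hypothetical stand-in: a model proposing an answer.
    answers = {"2+2": "4", "capital of France": "Paris"}
    return answers.get(question, "unknown")

def verifier_model(question: str, proposed: str) -> bool:
    # Hypothetical stand-in: an independent model judging the proposal.
    accepted = {("2+2", "4"), ("capital of France", "Paris")}
    return (question, proposed) in accepted

def cross_check(question: str) -> dict:
    """Run the generator, then the verifier; flag disagreements."""
    proposed = generator_model(question)
    verified = verifier_model(question, proposed)
    return {"question": question,
            "answer": proposed,
            "needs_review": not verified}

if __name__ == "__main__":
    for q in ["2+2", "capital of France", "mass of Jupiter"]:
        print(cross_check(q))
```

The design choice worth noting is that the verifier never sees the generator's reasoning, only its output, so the two systems fail independently; anything the pair disagrees on is routed to a human rather than silently accepted.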
Heuristic, unpredictable math is leading to an equally unpredictable policy and development future, a risk we all share. One thing about this genre of skeptics is predictable, though: their focus on what has not yet been achieved.