The Sustainability Worriers, perhaps the most pertinent Cassandras of our time, raise multiple valid concerns about the unchecked advancement of GenAI. Their worries span a wide spectrum, from environmental impact and job displacement to ethical dilemmas and societal risks. An era in which we cannot understand our machines' innovations will be fraught with the risks that have long been staples of dystopian science fiction.
“If I do not do this, the person I hate the most will” is the classic prisoner's dilemma set before every major corporation and nation. The fear of falling behind in the global race for AI dominance has led to unprecedented government support and corporate investment in GenAI, associated semiconductor manufacturing, and downstream automation-driven applications. This unparalleled investment surge, fueled by geopolitical tensions and AI's potential in defense, has produced a familiar spectacle: the same people who discuss the downsides and advocate control on stage sign off on the most aggressive advancement programs backstage.
Despite the growing chorus of concern, policymakers have erected few obstacles so far, and their tired, time-invariant discourses no longer draw crowds at conferences because they fail to keep pace with the rapid advancements in the field. High-profile incidents underline that the risks are more than real: the CrowdStrike outage (which was not caused by GenAI but exposed the dangers of heavy tech dependency and rising interconnectedness), the abandonment of carbon-emission targets by large tech companies, and the development of next-generation weapons. The sharp decline in white-collar job opportunities for new graduates worldwide could arguably also be a consequence of GenAI's progress.
The lack of significant policy responses thus far does not rule out future regulatory interventions, especially if GenAI disrupts employment on a larger, more persistent, and more definitive scale. The growing power of tech giants, coupled with increasing dependence on AI, could trigger unforeseen regulations aimed at redressing the imbalance and mitigating risks.
Corporate inequality is drawing significant investor ire in many equity markets, and investors are joined by their usual adversaries: monopoly watchdogs. At least in the corporate world, the plight of those falling behind could incentivize regulatory action, although forced Bell-style breakups remain, for now, only a remote possibility.
The fields or companies one invests in can be exposed to sudden risks: legal cases over privacy or liability, reputational deratings, or damages inflicted by models and products we do not understand. These tail risks are not as unlikely to materialize as they may appear in a bull market.