AI investments through public versus private markets
Nilesh Jasani · June 24, 2023

Recent reports have drawn attention to Google's internal advisory regarding the potential privacy risks of using language models, including its own. This revelation should spur every significant nation and corporation to intensify its efforts in developing proprietary foundational AI models.

In simple terms, creating a foundational model involves training neural network-based programs on extensive datasets. This process equips the model to comprehend and interpret data of that kind. Once a foundational model is established, it can be fine-tuned - irrespective of whether it was developed in-house or accessed through an API - to serve a diverse range of applications.
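To make the fine-tuning step concrete, here is a minimal sketch using the open-source Hugging Face libraries. The base model name, corpus file, and hyperparameters are placeholders rather than recommendations, and any real project would require far larger data and compute.

```python
# Minimal fine-tuning sketch (illustrative only): adapt a pre-trained
# foundational model to an in-house text corpus. Names and settings below
# are placeholders, not recommendations.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # stand-in for any pre-trained causal language model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical proprietary corpus: one document per line in a text file.
dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")  # the fine-tuned weights stay in-house
```

The same fine-tuning recipe applies whether the base weights were trained in-house or obtained from a third party; what differs is who controls the model and the data that flows through it.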

In even simpler terms, as we transition into the era of Generative AI, corporations and countries essentially have three options:

1. Continue focusing on existing projects in data science, machine learning, and related AI fields while disregarding the transformative potential of generative AI and transformer models.
2. Acknowledge the paradigm shift in AI brought about by recent developments, and seek partnerships with entities that have developed language models or other foundational models. This approach provides a quicker path to leveraging AI, but it involves entrusting sensitive data to software developed by third parties.
3. Opt to build proprietary models from the ground up, using owned or rented hardware and essential programming tools.

For any serious business or nation, relying on an external foundational model for mission-critical applications carries inherent risks. Unlike operating systems, foundational models are not designed with robust privacy protections, making their use potentially problematic where sensitive information must be safeguarded.
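To illustrate where the privacy exposure comes from, here is a schematic contrast of the two integration paths. The provider endpoint, API key, and model paths are hypothetical placeholders: with an external API, the prompt and any sensitive data it contains leave the organization's infrastructure; with a self-hosted model, they do not.

```python
# Schematic contrast of the two integration paths (illustrative only).
# The endpoint URL, API key, and model names are hypothetical placeholders.
import requests
from transformers import pipeline

sensitive_prompt = "Summarize our unreleased Q3 financials: ..."

# Option 2: call an external provider's API -- the prompt (and any sensitive
# data embedded in it) is transmitted to infrastructure the organization
# does not control.
response = requests.post(
    "https://api.example-llm-provider.com/v1/complete",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "provider-large-model", "prompt": sensitive_prompt},
    timeout=30,
)
print(response.json())

# Option 3: run a proprietary or self-hosted model -- the data never leaves
# machines the organization controls.
local_llm = pipeline("text-generation", model="finetuned-model")
print(local_llm(sensitive_prompt, max_new_tokens=200)[0]["generated_text"])
```

Contractual safeguards can narrow the gap, but only the self-hosted path keeps both the model weights and the data entirely within the organization's perimeter.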

Furthermore, constructing a foundational model does not appear to be as complex or resource-intensive as establishing a semiconductor fabrication plant, developing a smartphone, or creating an operating system. In a previous post (https://bit.ly/3X6jfKA), I provided several examples to dispel any misconceptions about the difficulty of creating a foundational model. Nvidia CEO Jensen Huang has also stated that custom language and foundational models can be developed with a budget of 10-30 million dollars.

Of course, there are challenges to overcome. Computational power is a significant bottleneck, and finding the right engineering talent can be difficult. However, many large entities that currently invest significant time and resources in securing partnerships for popular model APIs do not seem ambitious enough. Perhaps they should follow Elon Musk's example and start their AI strategy from scratch. This approach is particularly relevant for major IT services firms and corporations in Asia. By developing their own foundational models, these entities can maintain control over their data, mitigate privacy risks, and kickstart their own cycles of innovation on a global scale.
