AI - the term that means too much, and hence too little
Nilesh Jasani
·
May 27, 2023

Ever since the term "Artificial Intelligence" was coined in the 1950s, its meaning has served as an almanac of our technological progress. Today, it is an umbrella broad enough to shelter anything from machine learning and neural networks to data analytics, for non-technologists and experts alike.

Most people feel they have known what AI is for a long while and cannot muster the energy to understand what changed last year that led some to herald an innovation as big as the advent of the Internet. One group dismisses the excitement, expressing disdain for what they see as a new round of "this time is different" claims. The carpe-diem-ers, meanwhile, are busy re-dressing their ongoing projects, or whatever understanding of "AI" they had some quarters ago, in new lingo so they can keep claiming, "We do AI!"

Let's trace the evolution of the term AI to understand why most of us are talking past each other, without saying anything concrete, whenever we use this phrase:
• 1950s - 1960s: Logic-Based AI: In its earliest days, AI meant creating rule-based systems to mimic human problem-solving skills in narrow, specific contexts. In this era of "symbolic AI", the belief was that by manipulating symbols logically, one could attain intelligent behavior, as in mathematical theorem proving.
• 1970s - 1980s: Expert Systems: In these years, AI was synonymous with "expert systems" - programs mimicking the decision-making abilities of domain-specific experts, as in medical diagnosis. These systems relied on "knowledge bases" and an "inference engine".
• 1980s - 1990s: Machine Learning: A significant shift happened with the advent of machine learning (ML), where systems could "learn" from data. Techniques such as decision trees, neural networks, and support vector machines marked a move away from symbolic AI towards a more statistical, data-driven approach.
• Late 1990s - 2000s: Deep Learning: The explosion of digital data and increased computational power led to more intricate methods, many carrying the tag of deep learning. Multi-layer neural networks - of course, called AI - heralded new architectures (convolutional, recurrent) with substantial progress in image and speech recognition.
• 2010s - present: General AI, Transfer Learning and Transformers: In recent years, AI has often engulfed concepts such as transfer learning (applying knowledge learned in one task to another related task) and transformers (a model architecture at the heart of modern natural language processing).
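The gap between the earliest eras and the machine-learning turn can be made concrete with a toy sketch. Everything below (the rules, the animal features, the data) is a hypothetical illustration, not any real system: first a symbolic, rule-based classifier in the spirit of 1950s-80s AI, where a human writes the knowledge; then a minimal data-driven learner (a one-feature "decision stump") in the spirit of 1980s-90s ML, where a threshold is found from labelled examples instead of being hand-coded.

```python
# Toy contrast: symbolic AI vs. machine learning (illustrative only).

# Era 1 style: symbolic / expert-system AI. The "intelligence" is a
# hand-written knowledge base of rules, applied by a generic loop
# (the "inference engine").
RULES = [
    (lambda animal: animal["lays_eggs"] and animal["flies"], "bird"),
    (lambda animal: animal["lays_eggs"] and not animal["flies"], "reptile"),
    (lambda animal: not animal["lays_eggs"], "mammal"),
]

def classify_by_rules(animal):
    # Inference engine: fire the first rule whose condition matches.
    for condition, label in RULES:
        if condition(animal):
            return label
    return "unknown"

# Era 2 style: machine learning. Nobody writes the rule; a threshold is
# *learned* from labelled examples by brute-force search.
def fit_stump(examples):
    # examples: list of (feature_value, label) pairs, labels "small"/"big".
    best_threshold, best_correct = None, -1
    for threshold in {x for x, _ in examples}:
        correct = sum(
            ("big" if x >= threshold else "small") == y for x, y in examples
        )
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

def predict_stump(threshold, x):
    return "big" if x >= threshold else "small"
```

The symbolic classifier is only as good as its hand-written rules, while the stump generalizes from data: `fit_stump([(1, "small"), (2, "small"), (8, "big"), (9, "big")])` recovers a separating threshold without anyone encoding it, which is the essential shift the 1980s-90s bullet describes.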

This shifting definition of AI reflects not just technological progress but also our evolving understanding of what constitutes intelligence. What happened last year, when generative networks and large language models began producing human-like results, is a giant step, partly because of the light it shines on our own complex cognitive systems. A bit more on this at http://bit.ly/425djlY
