A Brief History of Artificial Intelligence

Lorenzo Carnevale
May 13, 2024 · 5 min read

Since late 2022, Artificial Intelligence (AI) has been the leading topic in Information and Communication Technologies (ICT). On one hand, it seems that human beings could no longer exist without AI; on the other, it seems that human beings will not exist because of AI. Either way, it is difficult to discuss a topic without knowing its history.

The history of AI began many years ago. While World War II was raging in Europe, Warren McCulloch and Walter Pitts were laying the first foundations of AI, drawing on the physiology of neurons in the brain. In 1943 they proposed a model of artificial neurons that could be activated (on/off logic) by stimuli received from a certain number of adjacent neurons. This inspired Marvin Minsky and Dean Edmonds, two Harvard students who built the first neural network computer in 1950.
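The McCulloch-Pitts unit can be sketched in a few lines of code. The following is purely illustrative (the function name and threshold values are my own choices, not from the 1943 paper): the neuron fires only when enough adjacent neurons stimulate it.

```python
# Minimal sketch of a McCulloch-Pitts threshold neuron (illustrative only).
# Inputs are binary (0/1) signals from adjacent neurons; the unit switches
# "on" when the number of active inputs reaches a fixed threshold.

def mcculloch_pitts_neuron(inputs, threshold):
    """Binary threshold unit: returns 1 (fires) or 0 (stays off)."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs and threshold 2, the unit behaves like a logical AND:
print(mcculloch_pitts_neuron([1, 1], 2))  # fires: 1
print(mcculloch_pitts_neuron([1, 0], 2))  # stays off: 0
```

With threshold 1, the same unit instead behaves like a logical OR, which is exactly the point McCulloch and Pitts made: networks of such units can compute logical functions.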

In 1950, Alan Turing made his first contribution to AI: the Turing test. It was conceived as an experiment to answer a much-debated question: "Can a machine think?". According to the test, a computer passes if a human examiner, after posing questions to it, cannot tell whether the answers come from a person or a machine. Such a machine would need to interpret natural language, represent knowledge, reason and learn automatically. Even so, it would not need the ability to physically interact with objects and humans. Turing was a British mathematician, logician, cryptographer and philosopher. During his lifetime he was mistreated because of his ideas and sexual orientation, but today he is considered one of the fathers of computer science and one of the greatest mathematicians of the 20th century. The closest thing to a Nobel Prize in computer science bears his name: the Turing Award.

A further push came in 1956 from John McCarthy (Dartmouth College), who convinced Marvin Minsky, Claude Shannon and Nathaniel Rochester to bring together American researchers interested in the theory of neural networks. The resulting Dartmouth workshop gathered about ten of the best minds of the time and led Allen Newell and Herbert Simon to present the Logic Theorist, a system for autonomously proving mathematical theorems.

In 1952, Arthur Samuel (IBM) developed a program that played checkers using what we would now call reinforcement learning. It was the first attempt to prove that software could learn to play better than its creator. Interest in AI grew so much that, in 1958, John McCarthy defined the Lisp programming language, destined to become the point of reference for AI over the following 30 years.

Despite its great popularity in the 1950s and 1960s, AI suffered from poor computing power. This led to experiments in machine evolution (now called genetic algorithms), based on the idea that by making a series of small changes to a program it would be possible to generate another with better performance. Unfortunately, this idea did not bring satisfactory results at the time, and the British government decided to cut AI funding in almost all universities. In 1971, while the famous rock band Queen was forming in London, Edward Feigenbaum's studies revived interest in the field with a new technology called expert systems, which became popular in business in the 1980s. However, this was not enough: the "AI Winter" arrived as a result of failed business investments, since expectations around AI had been set too high.

The AI community then shifted back to a more scientific approach based on statistics and machine learning. Benchmark problems were created to facilitate the comparison of future AI solutions. In 1988, Rich Sutton reconnected reinforcement learning to the theory of Markov decision processes, opening the way to relevant applications in robotics.
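The flavor of Sutton's temporal-difference learning can be conveyed with a toy example. The sketch below (purely illustrative, not Sutton's original formulation; all names and constants are my own) estimates state values of a tiny two-state Markov chain using the TD(0) update rule:

```python
import random

# Toy TD(0) sketch: estimate state values of a two-state Markov chain.
# From either state we jump to a uniformly random state and receive a
# reward of 1 only when landing in state 1. Expected reward per step is
# 0.5, so with discount 0.9 both values should approach 0.5 / (1 - 0.9) = 5.

random.seed(0)
alpha, gamma = 0.1, 0.9   # learning rate and discount factor
values = [0.0, 0.0]       # one value estimate per state

state = 0
for _ in range(5000):
    next_state = random.choice([0, 1])
    reward = 1.0 if next_state == 1 else 0.0
    # TD(0): nudge the estimate toward reward + discounted next-state value
    values[state] += alpha * (reward + gamma * values[next_state] - values[state])
    state = next_state

print(values)  # both estimates hover around 5 after training
```

The key idea, which Sutton made precise, is that the agent learns from the difference between successive predictions rather than waiting for a final outcome.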

The real turning point for AI was the development of the World Wide Web (WWW). Enormous quantities of data (datasets) were created, and this led to the well-known phenomenon called big data. New algorithms were designed to address the problem of an excessive amount of "unlabeled" data, and the availability of big data made AI attractive again. Yann LeCun, Yoshua Bengio and Geoffrey Hinton developed the concept of deep learning, which brought AI to the level of interest we experience today. Such systems dramatically improved the ability to classify objects from the thousands of available images. Another noteworthy example is AlphaGo, a program capable of beating the (human) world champion of Go.

Nowadays, we are experiencing the disruptive trend of generative AI. OpenAI released ChatGPT in November 2022 with the intent of sharing a powerful chatbot with humanity, a step in the direction of Artificial General Intelligence (AGI). We still do not know whether that expectation will be met, but we could argue that ChatGPT comes close to passing the Turing test.

Disclosure

The facts described in this blog post are the result of personal study and do not pretend to be exhaustive. If you spot errors or missing facts, please report them to me.

References

  • Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th Edition.


Lorenzo Carnevale

Assistant Professor @unimessina, #algorithms lecturer 👨🏻‍🏫 active in #DistributedAI @fcrlabunime 🤖 looking at #swarm solutions 👨🏻‍💻 #BePositive 🕊