www.apress.com

26/08/2019

Why Understanding AI History is So Important

by Tom Taulli


Artificial intelligence is a 50-year overnight success story. Let’s take a look at some of its history.

Academic and government support were critical. While it's tough to pinpoint the origin of AI, one of the earliest milestones dates to the mid-1930s, when Alan Turing wrote his groundbreaking papers on computation. He would later propose his famous Turing Test.

Another significant milestone came in 1956, when John McCarthy set up a ten-week research project at Dartmouth College. He was also the first person to coin the phrase “artificial intelligence”. The project included academics such as Marvin Minsky, Nathaniel Rochester, Allen Newell, O. G. Selfridge, Raymond Solomonoff, and Claude Shannon, all of whom would be instrumental in the formation of the field.

Private companies initially had little interest in the field, one reason being the fear of being blamed for job losses caused by automation. As a result, the U.S. government took a major role in AI, especially through DARPA, which provided substantial funding to universities like Stanford, MIT, and Carnegie Mellon. With the space race and the Cold War driving demand for new technology, the federal government was unusually open-minded for the era. The result was a flourishing of innovations, some prescient examples being the autonomous robot Shakey, the Eliza chatbot, chess-playing programs, time-sharing (which helped lead to the emergence of the Internet), and basic forms of computer vision and voice recognition.

Perhaps the most important trend in AI during the past decade is the rise of deep learning, which has proven quite effective at deciphering patterns that humans alone cannot detect. Because of deep learning, we have seen major advances across industries like healthcare, transportation, energy, and financial services. The early development of this technology goes back to the 1950s, when Frank Rosenblatt created the perceptron, a single-layer neural network. The computing power of the time was not sufficient to produce strong results, however, and Rosenblatt's work met with pushback. In 1969, Minsky and Seymour Papert wrote a book called Perceptrons, which was a relentless takedown of the approach.
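To make the idea concrete, here is a minimal sketch of a Rosenblatt-style perceptron: a single layer of weights plus a bias, trained with the classic perceptron update rule. The code and its toy AND-gate dataset are purely illustrative (they are not from the book or this article). They also hint at the limitation Minsky and Papert highlighted: a single layer can only learn linearly separable patterns, so it can learn AND but not XOR.

    import numpy as np

    def train_perceptron(X, y, epochs=20, lr=0.1):
        """Classic perceptron learning rule: update weights only on mistakes."""
        weights = np.zeros(X.shape[1])
        bias = 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                prediction = 1 if np.dot(weights, xi) + bias > 0 else 0
                error = target - prediction  # -1, 0, or +1
                weights += lr * error * xi
                bias += lr * error
        return weights, bias

    # Toy, linearly separable dataset: the logical AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])

    w, b = train_perceptron(X, y)
    print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]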

For the next decade or so, the concept faded into obscurity. But researchers like Geoffrey Hinton saw the importance of neural networks and revived the idea in the late 1970s. Hinton realized that Moore's Law was on his side and that computer systems would eventually be powerful enough to make deep learning practical.

During the 1970s, the U.S. government significantly pulled back its support for AI, due both to problems with the economy and to AI falling short of its hyped expectations. This period became known as an AI winter.

This raises a question for our modern era: Might we see another AI winter, and soon? It's tough to predict. But this time around, AI has delivered significant innovations as well as revenue-generating opportunities. It also helps that tech giants like Google, Microsoft, and Facebook see AI as strategic to their businesses and have invested billions in pushing the boundaries of the technology.


About the Author

Tom Taulli has been developing software since the 1980s. In college, he started his first company, which focused on the development of e-learning systems. He created other companies as well, including Hypermart.net, which was sold to InfoSpace in 1996. Along the way, Tom has written columns for online publications such as businessweek.com, techweb.com, and Bloomberg.com. He also writes posts on artificial intelligence for Forbes.com and is an advisor to various companies in the AI space. You can reach Tom on Twitter (@ttaulli) or through his website (www.taulli.com).

This article was contributed by Tom Taulli, author of Artificial Intelligence Basics.