A Brief History of Computers #2
This is the second part of a three-part series on the history of computers by CTS. The goal is to give an idea of why computers are the way they are today. This time: the origins of the ideas behind AI and AI technology, and how we use it now.
Once upon a time in a land far, far away, a sculptor fell in love with his own creation. He longed for her so much that the gods (all Olympians, at the time) felt sorry for him and breathed life into the statue. The ivory object suddenly turned into a thinking, loving woman, and they lived happily ever after.
Humans had been thinking about artificial intelligence long before it was even a remote possibility. The thought of a lifeless object showing human traits was fascinating, and as the capabilities of machines slowly evolved (see: the history of computers #1), so did, in a somewhat parallel world, the ideas about thinking machines.
Creating the next Frankenstein
For a long time, humans were very focused on whether it was possible to “make” a human or not. It was what people knew from science fiction, and now that the computer thing was finally happening, they wanted results. But there was no magic trick to turn a logical, metallic machine into a thinking human of flesh and blood, and as computers slowly developed, humanity had to make peace with the thought that we wouldn’t be capable of making a Frankenstein’s monster anytime soon. Instead of creating a new human in all its immense complexity, the goal narrowed down to one very particular trait of humanity: recreating human intelligence.
There was a strong belief among the scientific community that every human action, if described in enough detail, could be reduced to mechanical processes. This thought had its roots in philosophers like Descartes, Thomas Hobbes (“reason … is nothing but reckoning”) and later Leibniz, and now that computers in the 40s were indeed performing formerly human tasks like calculation, this idea was strengthened: the goal of artificial intelligence would not be the creation of a human but the creation of an intelligent machine.
This conclusion turned out to be no conclusion at all. Striving towards intelligence was a nice thought, but what actually was intelligence? And how much of it did a machine have to show to count as intelligent? There were now machines that could perform calculations that until then only very “intelligent” people could do, but at the end of the day that was just following instructions. Were the intentions and decision processes behind showing intelligence important too? How do you strive towards something that isn’t even clearly defined?
This discussion was put on a sidetrack in 1950 by Alan Turing, a pioneer in the field, with his famous paper “Computing Machinery and Intelligence”, which introduced “the imitation game”. Turing simply stated that there was not much point in debating what intelligence was: if a machine was ever made capable of imitating a human, that would be proof enough of intelligence. And so he came up with a test, the Turing Test, which no machine has convincingly passed yet. The idea was that a machine would hold conversations with a number of human judges; Turing predicted that machines would eventually fool around 30% of them after a five-minute conversation. Six years after this paper was published, in 1956, the research field of Artificial Intelligence was officially launched at the Dartmouth workshop.
But how did they want to “teach” machines to do things in the first place? Imagine you have a machine, with (just like in #1 of this series) input and output. The goal is to find the most effective way of creating the desired output depending on the input.
These were the two ways Alan Turing envisioned accomplishing this:
Imagine a computer has 100 ways to get from input X to output Y, and the ideal path may differ depending on the input. The computer, logically, will try out these paths one by one. Alan Turing’s idea was to give the machine positive feedback if the path was effective and negative feedback if it wasn’t. This way the machine would slowly learn which input needed which path to produce the best output.
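As a toy illustration of this reward-and-punishment idea, here is a minimal sketch. The three “paths” and the training loop are entirely hypothetical (the article doesn’t specify any), but the mechanism is the one described above: try paths, score +1 when the output is right and −1 when it is wrong, and end up preferring the path that earns the most reward.

```python
import random

# Hypothetical setup: each "path" is a different rule for turning
# input into output. The machine does not know which one is right.
PATHS = {
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def train(examples, rounds=200, seed=0):
    """Trial and error with feedback: pick a random example and a
    random path, then score +1 if the path produced the desired
    output and -1 if it didn't."""
    rng = random.Random(seed)
    scores = {name: 0 for name in PATHS}
    for _ in range(rounds):
        x, desired = rng.choice(examples)
        name = rng.choice(list(PATHS))
        scores[name] += 1 if PATHS[name](x) == desired else -1
    # The machine "learns" to prefer the highest-scoring path.
    return max(scores, key=scores.get)

examples = [(3, 6), (5, 10), (1, 2)]  # desired behaviour: doubling
print(train(examples))                # → double
```

Real reinforcement learning is far more sophisticated, but the core loop of acting, receiving feedback, and adjusting preferences is the same.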
Another way to get from X to Y effectively is by showing the computer a very large sample of ideal routes. This way the machine learns by example: it will likewise learn which kind of input needs which path for the most effective output. As you would expect, and this remains an important principle in machine learning, the larger the sample, the better the outcome.
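Learning by example can be sketched just as simply. The sample data and path names below are made up for illustration: the machine is shown many (input, ideal path) pairs and, for each kind of input, simply remembers which path worked best most often.

```python
from collections import Counter, defaultdict

def learn_by_example(samples):
    """Learn from a sample of ideal routes: for each kind of input,
    count which path was ideal, then keep the majority answer."""
    votes = defaultdict(Counter)
    for kind, best_path in samples:
        votes[kind][best_path] += 1
    # For new input of a known kind, pick the path seen most often.
    return {kind: counts.most_common(1)[0][0]
            for kind, counts in votes.items()}

# Hypothetical sample: inputs labelled with the path that worked best.
samples = [
    ("small", "path_a"), ("small", "path_a"), ("small", "path_b"),
    ("large", "path_c"), ("large", "path_c"),
]
model = learn_by_example(samples)
print(model["small"])  # → path_a
print(model["large"])  # → path_c
```

This majority-vote sketch also shows why sample size matters: with only three “small” examples, one mislabelled pair could flip the answer, while a larger sample washes such noise out.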
The way we train machine learning models nowadays is still in some ways based on these principles, although much further developed.
Artificial Intelligence, or machine learning, now
Until now we’ve been talking about intelligent machines, because that’s how the ideas around Artificial Intelligence started. Nowadays that framing doesn’t really apply anymore. If a robot shows some kind of intelligence, it isn’t the robot that is intelligent; it’s the software within it. And machine learning models are used everywhere nowadays, from the suggestions you get on Amazon to Gmail filtering out your spam: it’s all machine learning (check this article for more exciting examples).
Intelligence has since been defined by professor Linda Gottfredson as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” Or, for a machine: doing all those things without a program telling it what to do step by step. Scientists working on AI in the middle of the 20th century were overly optimistic about getting machines to that point, but the road to intelligent machines was a bumpy one. In the 70s, scientists could be found saying machines would be as intelligent as humans within eight years, and considering those promises failed to materialize, it’s no wonder we had to go through several “AI winters” to get where we are now.
So where are we now? The Turing Test hasn’t been passed (not officially, at least). We do have a lot of computers that are incredibly smart in one particular area. Well-known examples are the reigning world chess champion being beaten by a computer (Deep Blue vs. Kasparov) in 1997, and a top Go champion (AlphaGo vs. Lee Sedol; Go is a far more complicated game, so this was quite a breakthrough) in 2016. These are narrow AI applications: they have one task and they are very good at it. The main narrow AI technology nowadays is Machine Learning, which, true to its name, uses different ways to “learn”. People had been thinking about ways to train their computers long before that, though.
Real, general artificial intelligence
So where’s all this going? Scientists spent the 60s and 70s telling everyone how quickly we’d have AI on a human level, and we’re still not there. But people are saying it again, and this time it does look like we’re on the verge of an AI breakthrough. AI will get smarter, and very quickly. We’ll start combining different narrow AIs, and at some point (some say within 40 years) we might have a computer showing all the signs of human intelligence, and shortly after (way) more: a “machine” that looks at us the way we look at mice in terms of intelligence. We simply don’t know what will happen then. What we do know is that machine learning offers us a way to augment human capabilities as never before. More on that in the next part!
CTS is a Google Premier Partner that helps clients with big data, application development and, yes, also machine learning solutions in Google Cloud. We’re an organization working with the newest technologies, but we are aware of where they come from. This is the second part in a three-part series on the history of computers by CTS.
An important source for this article was Andrew Hodges’ Alan Turing: The Enigma of Intelligence. It goes deep into Alan Turing’s mathematics and is recommended for anyone interested in computers, mathematics, artificial intelligence and gay history. You can also read more on AI and its future in this article by WaitButWhy.