Science fiction popularized the concept of artificially intelligent robots throughout the first part of the twentieth century.
It started with the “heartless” Tin Man from the Wizard of Oz and continued with the humanoid robot in Metropolis that impersonated Maria.
By the 1950s, a generation of scientists, mathematicians, and philosophers had culturally adopted the concept of artificial intelligence (or AI).
Alan Turing, a young British polymath who investigated the mathematical possibilities of artificial intelligence, was one of these individuals.
Why can’t machines solve problems and make decisions the same way people do? Turing reasoned that humans use available information, as well as reason, to solve problems and make decisions. His seminal 1950 paper, “Computing Machinery and Intelligence,” in which he described how to build intelligent machines and how to test their intelligence, followed this logic.
Making the Pursuit Possible
Alas, words are cheap. What prevented Turing from getting to work right away?
First, computers had to undergo a fundamental transformation.
Prior to 1949, computers lacked a critical prerequisite for intelligence: they could only execute commands, not store them. In other words, computers could be told what to do, but they couldn’t remember what they were told.
Second, computation was prohibitively costly.
Leasing a computer in the early 1950s could cost up to $200,000 per month.
Only prominent colleges and large technology firms could afford to take their time exploring these uncharted waters. To persuade funding sources that machine intelligence was worth pursuing, a proof of concept and advocacy from high-profile people were needed.
The Conference that Started it All
Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist provided the proof of concept five years later.
The Logic Theorist was a program developed by the Research and Development (RAND) Corporation to simulate human problem-solving abilities.
Often cited as the first artificial intelligence program, the Logic Theorist was presented in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky. McCarthy, envisioning a great collaborative effort, convened a historic conference that brought together prominent experts from many fields for an open-ended discussion on artificial intelligence.
Unfortunately, the conference fell short of McCarthy’s expectations; people came and went as they liked.
Despite this, everyone agreed that AI was achievable.
The importance of this event cannot be overstated, since it ushered in the following two decades of AI development.
Roller Coaster of Success and Setbacks
AI flourished from 1957 to 1974. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms improved as well, and people got better at knowing which algorithm to apply to a given problem.
Early prototypes, such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA, showed promise in problem solving and spoken language interpretation, respectively.
These achievements, together with the advocacy of renowned researchers (namely, the DSRPAI attendees), persuaded government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at a number of institutions. The government was particularly interested in a machine that could transcribe and translate spoken language and process large amounts of data quickly.
Breaking through the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial.
To communicate, for example, one must know the meanings of a large number of words and be able to comprehend them in a variety of contexts. “Computers were still millions of times too feeble to display intelligence,” said Hans Moravec.
Better Times are Coming
Surprisingly, the fundamental limit of computer storage that had held us back 30 years earlier was no longer an issue.
Moore’s Law, the observation that the memory and speed of computers roughly doubles every two years, had finally caught up with, and in many cases surpassed, our needs. This offers some insight into the roller coaster of AI research: we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
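As a rough back-of-the-envelope sketch, a clean two-year doubling period (an idealization that real hardware only approximates) compounds dramatically:

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor under an idealized exponential doubling model."""
    return 2 ** (years / doubling_period)

# A strict two-year doubling implies roughly a thousandfold gain per two decades:
print(moores_law_factor(20))  # 1024.0
```

The exponent is what matters here: stretching the doubling period from two years to three cuts the twenty-year gain from about 1000x to about 100x, which is why even a modest slowdown changes the picture for compute-hungry methods.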
Artificial Intelligence is Everywhere
We currently live in the era of “big data,” in which we can collect massive amounts of information, far too much for any single person to process. The application of artificial intelligence here has already proved highly fruitful in technology, banking, marketing, and entertainment. We’ve seen that even when algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. Moore’s Law may be slowing down a little, but the growth of data hasn’t slowed at all.
Breakthroughs in computer science, mathematics, or neuroscience could all serve to break through the ceiling of Moore’s Law.
So let’s imagine having a fluid dialogue with an expert system, or having a chat in two languages that is being translated in real time. In the next twenty years, we can also expect to see self-driving cars on the road (and that is conservative).
In the long run, the goal is general intelligence: a machine that can perform all tasks better than humans. This is similar to the sentient robots we are used to seeing in movies. To me, it seems implausible that this will be accomplished in the next 50 years.
Even if the capability existed, ethical concerns would serve as a significant barrier to implementation. For now, though, we’ll let AI steadily improve and run rampant in society.
For related topics: www.101toolbox.com