Documenting the Coming Singularity

Saturday, April 07, 2007

Artificial Intelligence: Past, Present and Oh Damn.

I've been working on a research paper for my MBA class, Management Information Systems, and by Jove, I think I've got it. I'm going to share it with you, my beloved readership. Don't worry, it's not too long, and it's fascinating.

I. AI's Past

The timeline of AI's history and development begins in the 1950s (Figure 1). The term "artificial intelligence" was coined by John McCarthy (Figure 2) of the Massachusetts Institute of Technology and Stanford University, winner of the prestigious Turing Award (1971) and the Benjamin Franklin Medal in Computer and Cognitive Science (2003) (Wikipedia). Although a standard definition is elusive, according to D. Marr of MIT, AI is "the study of complex information processing problems that often have their roots in some aspect of biological information processing" (Marr, 1977). As defined by computer scientist Elaine Rich, it is "the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about the problem domain" (Kurzweil, 2005). The most fascinating goal of AI development, however, is the achieving, and ultimately the surpassing, of human intelligence.

[Figure 1: Timeline of AI's history and development.]

[Figure 2: John McCarthy.]
Alan Turing, who is often given the title of "father of modern computer science," proposed the "Turing test" to determine a machine's capacity for human intelligence. As described in his 1950 paper, "Computing Machinery and Intelligence," a human judge participates in a natural-language conversation with two other participants, one human and the other a machine. If the judge cannot tell the two apart, then the machine can be said to have demonstrated human-like intelligence (Turing, 1950).

Probably AI's most famous icon is HAL, the paranoid machine intelligence in the movie "2001: A Space Odyssey." Entering the public consciousness when this motion picture debuted in 1968, AI became a popular idea and subject of much media hype in the following years.

II. AI's Submergence

When reality failed to live up to the hype, owing to the media's (and thus the public's) misunderstanding of the time frames involved, AI sank beneath the turbulent surface of public awareness during the 1970s and 1980s, though it never disappeared from the interests or pursuits of scientists and computer engineers. According to Rodney Brooks, director of the MIT AI Lab, "There's this stupid myth out there that AI has failed, but AI is everywhere around you every second of the day. People just don't notice it" (Talbot, 2002).

While the public lost sight of it, AI was being developed and refined. Programming languages and software were being designed, and computing substrates and architectures were being built, that would allow powerful AI systems to be introduced in business, medicine, weather forecasting and the military. Slowly but surely, AI has been quietly embedding itself in human society, to the point that it now benefits virtually every person, even though most of us are unaware of its ubiquity.

During its decades-long hiatus from the public consciousness, AI developed into four distinct types of computing systems: Expert Systems, Neural Networks, Genetic Algorithms, and Intelligent Agents. (Ray Kurzweil calls these AI applications "narrow" AI, as opposed to the "strong" AI that exceeds human levels of intelligence, which he predicts will arrive in the 2020s and which benefits from his "law of accelerating returns.") These systems are designed to perform in human brain-like or evolutionary ways, but with the massively amplified speed and power of machine substrates.

Expert Systems: Expert systems mimic, in amplified form, the ability of human experts to make decisions and recommend courses of action based on the answers to a large number of questions, even when there is no single "correct" answer. The most sophisticated systems are capable of performing evaluations based on real-world uncertainties. One of the most fundamental advantages of an expert system over a human expert is the relative ease with which its expertise can be transferred to other machines, compared to the years of training necessary to "create" additional human experts. A minimal sketch of the underlying mechanism appears below.
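
To make this concrete, here is a rough sketch in Python (my own toy illustration, not any production system) of the forward-chaining idea behind many expert systems: facts are matched against IF-THEN rules until no new conclusions can be drawn. The diagnostic rules are hypothetical.

```python
# A minimal forward-chaining rule engine (toy sketch, hypothetical rules).
def forward_chain(facts, rules):
    """Apply rules repeatedly until no new facts can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire the rule if all its conditions hold and it adds something new.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy diagnostic knowledge base: ({conditions}, conclusion)
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

print(forward_chain({"fever", "cough", "short_of_breath"}, rules))
# -> includes 'flu_suspected' and 'see_doctor'
```

A real expert system adds an interactive question-asking front end and, often, confidence factors for uncertain answers, but the chaining loop is the heart of it.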

Neural Networks: Neural networks mimic the massively parallel processing power of the human brain by modeling programs on the brain's cortical structures. Just as the human brain is especially effective at pattern recognition, so neural networks are useful in fields such as natural-language processing and facial recognition. A toy sketch of a single learning "neuron" follows.
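
As a rough illustration (my own sketch, not drawn from the paper's sources), here is the smallest possible neural network: a single artificial neuron, a perceptron, that learns the logical OR function from examples. Real pattern-recognition networks stack many such units into layers.

```python
# A single perceptron learning logical OR (toy sketch).
import random

def step(x):
    return 1 if x >= 0 else 0  # simple threshold activation

# Training data: (inputs, target) for logical OR
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
rate = 0.1

for _ in range(50):  # training epochs
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        # Nudge weights toward the correct answer (perceptron learning rule).
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

for (x1, x2), _ in data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + bias))
```

The "learning" is nothing more than repeated small corrections to the weights; scale the same idea up to millions of units and you get the networks used for faces and language.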

Genetic Algorithms: Genetic algorithms imitate the power of evolution, but whereas biological evolution requires enormous spans of time to produce results, these systems can run through millions of generations in the blink of an eye. They are able to try out millions of candidate solutions to problems that have a large number of variables (for example, finding the most efficient configuration of a jet engine) and converge on an effective solution far more quickly than humans could, as the sketch below shows.
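
Here is a minimal sketch of the idea, using the standard toy "OneMax" problem (evolve a bitstring of all 1s) rather than a jet engine; the fitness function is the part an engineer would swap out for a real engineering score.

```python
# A minimal genetic algorithm on the toy OneMax problem.
import random

GENES, POP, GENERATIONS = 20, 30, 100

def fitness(bits):
    return sum(bits)  # count of 1s; the quantity we want to maximize

def mutate(bits, p=0.05):
    return [b ^ 1 if random.random() < p else b for b in bits]  # flip bits

def crossover(a, b):
    cut = random.randint(1, GENES - 1)  # splice two parents at a random point
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]  # selection: keep the fittest half
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best), best)  # typically reaches the all-1s optimum
```

Selection, crossover and mutation are all the algorithm knows; the problem-specific intelligence lives entirely in the fitness function.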

Intelligent Agents: Intelligent agents mimic the human brain's ability to adapt and learn. The more sophisticated of these programs are sometimes called autonomous intelligent agents, a term that conveys their ability to act independently of human involvement. Intelligent agents learn from and adapt to their environments. One of the most commercially successful applications of IAs is data mining, in which software programs operate in data warehouses, discovering information, and connections between pieces of information, that might be useful for human managers to know about.
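
As a rough sketch of the learning loop inside such an agent (again my own toy example, not a data-mining product), here is an agent that discovers by trial and error which of three actions pays off best. The payout probabilities are hypothetical and hidden from the agent.

```python
# A simple learning agent: epsilon-greedy trial-and-error (toy sketch).
import random

payout_probs = [0.2, 0.5, 0.8]   # the environment; hidden from the agent
estimates = [0.0] * 3            # agent's learned value of each action
counts = [0] * 3

for _ in range(1000):
    if random.random() < 0.1:                    # explore occasionally
        action = random.randrange(3)
    else:                                        # otherwise exploit best guess
        action = estimates.index(max(estimates))
    reward = 1 if random.random() < payout_probs[action] else 0
    counts[action] += 1
    # Incremental average: adapt the estimate toward observed rewards.
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned values:", [round(e, 2) for e in estimates])
# The estimates converge toward the true payout probabilities.
```

The agent is never told the rules of its environment; it adapts purely from feedback, which is the essence of what distinguishes an agent from a fixed program.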

III. AI's Resurgence

Kurzweil laments that as soon as an AI application is successfully deployed, it is no longer called AI but is "spun off as its own field." In spite of this phenomenon, AI continues to surge ahead and will resurface more and more in the public's awareness. Among the most difficult problems currently being tackled by AI:

Protein Folding: Improperly folded proteins can be, and often are, fatal, because a protein's function is intimately tied to its three-dimensional shape. The difficulty lies in the fact that simulating protein folding in three dimensions is a massive processing task (IBM estimates that Blue Gene's level of performance is "sufficient to simulate the folding of a small protein in a year of running time") (Figure 3). According to the IBM Blue Gene team, launched in 1999 as a five-year effort to build a massively parallel computer to study biomolecular phenomena such as protein folding: "The mission of the Blue Gene scientific program is to use large-scale biomolecular simulation to advance our understanding of biologically important processes, in particular our understanding of the mechanisms behind protein folding. Increased computational power translates into an increased ability to validate the models used in simulations and, with appropriate validation of these models, to probe these biological processes at the microscopic level over long time periods" (IBM, 2001).

[Figure 3: IBM's Blue Gene supercomputer.]

Missile Guidance and UAVs: As our society becomes less and less tolerant of civilian casualties in times of war, precision targeting of warheads becomes more and more important to military organizations. Our culture is also becoming less tolerant of military casualties among its own forces, which places a high value on systems that can deliver precisely targeted missiles from hundreds or even thousands of miles away, and on unmanned aerial vehicles (UAVs) that operate autonomously. Gary Chapman, writing for generation5's artificial intelligence repository, offers this chilling assessment: "Autonomous weapons are a revolution in warfare in that they will be the first machines given the responsibility for killing human beings without human direction or supervision. To make this more accurate, these weapons will be the first killing machines that are actually predatory, that are designed to hunt human beings and destroy them."

IV. AI's Future

Ray Kurzweil estimates that machines will achieve and surpass the complexity, and the intelligence, of the human brain within a few decades (Kurzweil, 2002). Kurzweil envisions a near-term future that involves interfacing, or perhaps more accurately, merging biological and machine intelligence. In a very real sense, this is already happening with cochlear implants and devices that allow the blind to see.

With the addition of nanotechnology, he sees nanomachines being designed to interact directly with human neurons, augmenting human memory and processing capabilities and allowing for full-immersion virtual reality, wherein there would be no subjective difference between virtual and real experiences.

Shortly after strong AI is achieved, machines will, according to Kurzweil, take over their own development, at which point the progress of machine intelligence will accelerate drastically. Eventually, says Kurzweil, biology will give way to more durable and powerful substrates, until human intelligence will be machine intelligence. Will it still be called "artificial"?

If you enjoyed this post, take a few seconds of your time and subscribe to our feed! The Price of Rice is updated daily!
