Documenting the Coming Singularity

Sunday, April 29, 2007

Superintelligence: Point of No Return?

Many people at the forefront of artificial intelligence research and development are quite certain that machines will attain human-level intelligence very soon, within two decades to be specific. They are also confident that, once that point is reached, and once machine intelligence has access to its own code and is therefore able to improve upon it, it will quickly become superintelligent. Then all bets are off.

The blog Accelerating Future has an excellent article about superintelligence and how such an intelligence might view humanity. Consider how we view the technology of early man.
For example, consider the world from the viewpoint of a Homo erectus. They had tools - handaxes. These tools were of various types - pointed, cordate, ovate, ficron and bout-coupé shapes, cleavers, retouched flakes, scrapers, and segmental chopping tools. Flint, basalt, chalcedony, quartzite, andesite, sandstone, chert and shale were all used as raw materials to build these axes. Some were very large and probably just ornamental. Some were discus-shaped and possibly used as hunting weapons. It is thought they also had a social role, with enterprising Homo erectuses fashioning better tools for greater peer approval. From the viewpoint of one of these guys, they had command over a remarkable number of handaxe forms and designs, and put them to use for a variety of different purposes.

From the viewpoint of an intelligence smarter than us in the way that we’re smarter than Homo erectus, all our technology, from planes to trains to lamps to sinks to nanotubes to satellites to linear accelerators, probably looks like the same variants on the basic handaxe. Our descendants or future selves will not look back on us admiringly, and say, “golly gee, these guys were so clever that no leap in intelligence ever happened that bested the difference between them and their immediate predecessors!” They will not be genuinely impressed with what we are doing, any more than we are genuinely impressed by a pre-Neolithic hand axe. If we were to show them our greatest technological achievements, they might pretend to be genuinely impressed, so as not to hurt our feelings, but really, they’d probably be daydreaming on the side about mechanisms of such complexity that no aggregation of human beings, no matter how numerous or intelligent, could ever make sense of it all.
This is quite rightly a frightening prospect, is it not? As the article points out, comparing our intelligence to that of a self-improving superintelligence is not a matter of comparing yourself to Einstein. He was certainly smarter than most of us, but that's much too small a difference to be useful here.
I believe that a lot of Singularity skepticism derives from people who don’t get that we’re not the highest form of intelligence that the universe permits to exist. Being a computer science poindexter sometimes hurts more than it helps, because such people are accustomed to being the smartest ones in the room, making it all the more difficult to imagine an intelligence that not only blows them out of the water quantitatively, but can think thoughts they can’t think, even in principle. When people say, “oh, we’ll be able to fight the superintelligent AIs with our rebel guerilla group!”, or “we’ll nuke it to smithereens if it disobeys!”, they don’t get that, once it’s smarter than you, you’ve already lost. Once you’re dealing with something genuinely smarter than human, you have to rely on the hope that it doesn’t want to hurt you, not the assumption that your crappy “foolproof safeguards” will do a lick of good against a true superintelligence.
Again, a frightening prospect. But I see a more benign future in it, if we are careful. As Ray Kurzweil explains in The Singularity Is Near: When Humans Transcend Biology, it does not have to become an us-versus-them situation, because they will be us. Intelligent machines will be built on the architecture of the reverse-engineered human brain. Human minds will be instantiated in machine substrates. Human brains will be augmented with computer modules (they already are; cochlear implants are one example). All of these factors imply that the superintelligences will be evolved human intelligences, and will not be inclined toward suicide.

