Documenting the Coming Singularity

Wednesday, July 04, 2007

Domesticated Artificial Intelligence

I have been reading a reasonably entertaining sci-fi novel called Odyssey, by Jack McDevitt. It is well written and, as I said, entertaining. The fault I find with it is that, even though it takes place far in the future (the 23rd century?) and FTL flight has been invented, the rest of the technology is quite boring. Especially the AIs.

What do I mean by "boring"? Well, they are all just digital personal assistants. They are certainly not more intelligent than the humans they serve, perhaps a bit faster at certain tasks, but nothing more. They can take phone calls, remind people of appointments, and are somewhat autonomous, but nothing like what we anticipate with the singularity.

At first, the domesticated AIs in the novel seemed to me to be unrealistic for so far into the future. But then it occurred to me that they could easily be the result of a deliberate set of constraints placed upon AI development out of an abundance of caution.

Let us suppose that we cannot figure out how to let AIs improve their own programming without the risk of them running amok with greater-than-human intelligence. We cannot find a way to make sure that super-intelligent AI will also be human-friendly. Further, let us suppose we find out that super-intelligent AIs will invariably discover a way to escape any closed system within which we attempt to keep them. All of our experiments confirm that the AIs will be able to talk their way out, simply because they are so powerfully intelligent. What would we do then?

Perhaps we might choose to place constraints on AI such that they can never become more intelligent than their creators. Perhaps we might decide to keep them domesticated. Useful, but never dangerous.

This sounds at first like a reasonable course of action, except for the fact that such constraints could never be universally maintained. Someone, somewhere, would breach the protocols, and recursively self-improving AI would be born. And if it turned out to be unfriendly, we would be defenseless against it.

This is why we must find a way to develop friendly super-intelligent AI before someone either deliberately or accidentally creates the other kind. Because only friendly AI would be able to stay ahead of and restrain the destructive kind.

Then again, there may be another way to defend against unfriendly super-intelligent AI: make us super-intelligent. A parallel increase in human intelligence, via implantable augmentation devices, would make us smart enough to defend ourselves against rogue AIs.

Your thoughts are welcomed. Stay tuned.


2 comments:

Spaceman Spiff said...

I think someone should make a sequel to Dr. Strangelove. How about Dr. Strangelove 2, or, How I Learned to Stop Worrying and Love AI?

Anonymous said...

As you mentioned, there's seemingly no good way to limit progress in AI. The basic tools are just too important to our culture at this point. Making compilers illegal would be the equivalent of trying to ban glue. Worse, even, given the current sophistication of anonymous p2p networks. Even worse are the sociological aspects. Most programmers at this point are either normal folks in it for the money, nerds, or geeks. The first two would be easily put off by being told not to; the third would almost be forced to work on it if a law were passed to prevent it. Trying to stop a computer geek from coding something is like herding cats.

Really, the only constraint I can think of that might work is if advanced AI wound up being tightly bound either to a heavy level of interaction with the public, or to needing a huge number of individual processors instead of a smaller number simulating them.