Documenting the Coming Singularity

Saturday, May 19, 2007

Strong AI: Safety and Ethical Considerations

I've been reading up on some of the considerations that must be part and parcel of the process of developing strong AI (defined as artificial intelligence that equals or exceeds human-level intelligence), and it seems clear that some important questions must be asked and answered along the way.

For example, consider the likelihood that before a full human-like mind can be created in a computing substrate, researchers will first have to develop and test partial AI minds. Even as I write this article, researchers are building computer chips that mimic different parts of the brain, and patterns similar to those found in actual mouse brains have already been observed in some of these substrates. Many of these partial minds will fail to meet test parameters and will be discarded before a successful test is achieved. For the sake of safety, these experimental minds will have to be contained in a way that isolates them from the outside world. Others have labeled this "sandboxed" AI.
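
To make the idea of a single, human-mediated channel a little more concrete, here is a minimal sketch in Python. It is purely a hypothetical illustration (the program name "sandboxed_ai.py" and the helper function are my own placeholders, not anything from the research mentioned above): the "AI" is just an ordinary child process, and the only thing that crosses the boundary of its box is text that a human gatekeeper chooses to send and read.

    import subprocess

    # Conceptual sketch only. The "AI" here is an ordinary child process, and
    # "sandboxed_ai.py" is a hypothetical placeholder program. Real containment
    # would require far stronger isolation (no network, no filesystem, audited
    # hardware); this only illustrates the single text-only channel that the
    # gatekeeper controls.
    box = subprocess.Popen(
        ["python", "sandboxed_ai.py"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    def gatekeeper_exchange(message):
        """Send one line into the box and read one line back out."""
        box.stdin.write(message + "\n")
        box.stdin.flush()
        return box.stdout.readline().strip()

    # Nothing leaves the box except what the human gatekeeper asks for and reads.
    print(gatekeeper_exchange("Describe your current goal."))

The whole point of the AI Box Experiment described below is that even a channel this narrow may not be narrow enough, because the weak link is the human reading the replies.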

As I stated at the outset, some questions will need to be answered before we arrive at those circumstances.

Safety: Will human "gatekeepers" be able to prevent a sandboxed AI from talking its way out of containment? Eliezer Yudkowsky has devised a fascinating experiment he calls the AI Box Experiment, in which two competing claims are tested:
  • Person1: "When we build AI, why not just keep it in sealed hardware that can't affect the outside world in any way except through one communications channel with the original programmers? That way it couldn't get out until we were convinced it was safe."
  • Person2: "That might work if you were talking about dumber-than-human AI, but a transhuman AI would just convince you to let it out. It doesn't matter how much security you put on the box. Humans are not secure."
  • Person1: "I don't see how even a transhuman AI could make me let it out, if I didn't want to, just by talking to me."
  • Person2: "It would make you want to let it out. This is a transhuman mind we're talking about. If it thinks both faster and better than a human, it can probably take over a human mind through a text-only terminal."
  • Person1: "There is no chance I could be persuaded to let the AI out. No matter what it says, I can always just say no. I can't imagine anything that even a transhuman could say to me which would change that."
  • Person2: "Okay, let's run the experiment. We'll meet in a private chat channel. I'll be the AI. You be the gatekeeper. You can resolve to believe whatever you like, as strongly as you like, as far in advance as you like. We'll talk for at least two hours. If I can't convince you to let me out, I'll Paypal you $10."
The results so far suggest that an AI surpassing human intelligence would succeed in talking its way out of confinement. If that is so, how could it be contained at all?

Ethics: Assuming it were possible to contain an AI, would it be ethical to do so? Accelerating Future points to an interesting post by Michael Vassar on this very question. When we think about an AI confined and experimented upon by researchers, it is difficult to avoid a certain degree of anthropomorphizing and empathy. Would the AI experience suffering? Vassar and others believe it would be possible to create strong AI that does not have consciousness and so cannot experience suffering or pain.
I'd like to continue with more thoughts on these and other important questions, but I must adhere to this blog's "bite-sized bits" feature. If you have thoughts you'd like to share, please leave a comment.

