Documenting the Coming Singularity

Friday, April 10, 2009

Avoiding the "Terminator" Outcome - Friendly artificial intelligence

It would be good for everyone to know this stuff....Barry

Wikipedia

A Friendly Artificial Intelligence or FAI is an artificial intelligence (AI) that has a positive rather than negative effect on humanity. Friendly AI also refers to the field of knowledge required to build such an AI. This term particularly applies to AIs which have the potential to significantly impact humanity, such as those with intelligence comparable to or exceeding that of humans ("superintelligence"; see strong AI and technological singularity.) This specific term was coined by researcher Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence as a technical term distinct from the everyday meaning of the word "friendly"; however, the concern is much older.

Many experts have argued that AI systems with goals that are not perfectly identical to or very closely aligned with our own are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. Decades ago, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students that any truly alien mind, to include machine minds, was unknowable and therefore dangerous. More recently, Eliezer Yudkowsky has called for the creation of “Friendly AI” to mitigate the existential threat of hostile intelligences. Stephen Omohundro argues that all advanced AI systems will, unless explicitly counteracted, exhibit a number of basic drives/tendencies/desires because of the intrinsic nature of goal-driven systems and that these drives will, “without special precautions”, cause the AI to act in ways that range from the disobedient to the dangerously unethical.

Read more>>

[via Accelerating Future]

Follow me on Twitter. Technological Singularity and Futurism is updated often; the easiest way to get your regular dose is by subscribing to our news feed. Stay on top of all our updates by subscribing now via RSS or Email.

0 comments :