WorldCon 2011 Report: Day Two – The Ethics of AI

Thursday, August 18, 2011

The Ethics of AI

Artificial intelligence is likely to be different from human or other biological intelligence as we know it. Will we recognize it when we see it? Will we recognize it soon enough?

The majority of the arguments focused on "sentience" and "self-awareness". How do we measure sentience, particularly in a non-biological system? The Turing Test? Is that enough? Or is it biased toward recognizing only human-like intelligences?

How do we measure self-awareness? Is the "mirror test" truly reflective (sorry!) of self-awareness or just demonstrative of understanding of how a mirror works?

For machine-based systems, the "person" is in the data. This leads to all sorts of good questions and parallels in terms of the function of operating systems, physical structures, and their similarities and differences in both types of life.

Humans have differing physical structures but similar core elements: skeletons, circulatory systems, etc., and an autonomic system to drive them (an operating system?). Computers have motherboards, processors, and operating systems to drive them.

Humans have memories; computers have data. The argument was made that human personalities are based on their knowledge and memories (data + subjective analysis). Can computers be subjective?

Can compassion be programmed into a computer? At what point does it become more than data and programming? How do we quantify human compassion?

An interesting point was made that we legally grant rights to humans at the age of 18 (this assumes a certain level of maturity--that got a laugh). How would we measure the "maturity level" of an AI? Children and developmentally disabled individuals are not granted certain rights in order to protect them. Their caregivers are assigned responsibility. How might this apply to an AI?

What happens should an AI misbehave or act contrary--in other words, behave like a child? What happens if an AI commits a crime--intentionally or unintentionally? What would be an appropriate punishment?

A very interesting statement was made during this panel: The Three Laws of Robotics (from Isaac Asimov) are a form of slavery. Think about it.

(Note: Some of the questions and extrapolations included are mine as they occurred to me during the session.)