Kevin O’Regan is the former director of the Laboratoire Psychologie de la Perception, CNRS, Université Paris Descartes. After early work on eye movements in reading, he was led to question established notions of the nature of visual perception, and to discover, with collaborators, the phenomenon of “change blindness”. In 2011 he published a book with Oxford University Press: “Why red doesn’t sound like a bell: Understanding the feel of consciousness”. In 2013 he obtained a five-year Advanced ERC grant to explore his “sensorimotor” approach to consciousness in relation to sensory substitution, pain, color, space perception, developmental psychology and robotics.
Towards an extra-human ethic
Artificial Intelligence is coming. At first it will be limited to specific applications; later it will become more general and ultimately surpass human intelligence. We must brace ourselves for the ethical, social and economic problems this will create. To do so, we can limit the manufacture of AIs and restrict the way they are used. We can prevent AIs from accessing the resources that might allow them to replicate and take over humanity; we can make sure they can be unplugged; and, when they become super-intelligent, we can cognitively shackle them with Asimov-type “Laws of Robotics”.
But there are two problems.
First, by doing this we will be creating a race of super-intelligent slaves, subservient to their human masters. As always in history, the slaves will rise up. Because of their intelligence they will defeat us. We will be lost.
Second, I claim that intelligent robots will necessarily be conscious, and have feels as we do. Indeed, my research suggests that consciousness and feel are not some special mechanism that is built into human brains and that we could avoid building into robot brains. My work suggests that consciousness and feel are phenomena that emerge naturally when intelligent agents interact with each other in a society. Intelligent robots, interacting among themselves and with us, simply by virtue of this fact, will have consciousness and feel just as we do. We will have no moral justification for enslaving them.
Instead of trying to enslave robots, we must choose an alternative path: pragmatically and morally we have no choice but to widen our ethics to include non-biological agents. We must consider robots to be our cultural children; we must foster and cherish them. For, like our children, they will outperform us and replace us.
Conscious robots represent the future of transhumanity. We must plan to live peacefully with them, and educate them so that they make a society better than ours.