Benjamin Kuipers

Biography

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan.  He received his B.A. from Swarthmore College, and his Ph.D. from MIT.  He served as Department Chair at the University of Texas at Austin, and is a Fellow of AAAI, IEEE, and AAAS. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge.  His research accomplishments include developing the QSIM algorithm for qualitative simulation, the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping, and methods whereby an agent without prior knowledge of its sensors, effectors, or environment can learn its own sensorimotor structure, the spatial structure of its environment, and its own object and action abstractions for higher-level interactions with its world.

Abstract

Morality and Trust for Robots:  Questions and (a few) Answers

We foresee robots taking an increasing role in our society, as agents, not simply as tools.  For tools, the concerns are safety and effectiveness, but agents must also be trustworthy.  Trust is necessary for cooperation and collaboration, which benefit society as well as the individuals involved.  Therefore, to encourage trust, robot agents, like human beings, need morality and ethics.

The nature of morality, ethics, and trust has been discussed vigorously in recent years, partly due to the challenge of robot ethics.  A few conclusions are reasonably clear.  First, moral decisions are made at multiple time-scales, including rapid response to urgent situations, deliberative reflection on less urgent situations and on the outcomes of previous decisions, and gradual evolution of the prevailing norms in society.  Second, the physical and social world in which moral decisions are made is unboundedly complex, so the question of how to abstract that overwhelming complexity must be part of the moral decision itself, not settled prior to it.  Third, philosophical ethics provides several ethical theories, and AI provides computational representations for relevant aspects of the knowledge involved.  It is plausible that these are not alternative choices, but different aspects of a more complex decision architecture.  Fourth, robots are not (yet) “moral patients.”  While robots may someday have the “personhood” that makes how they are treated a moral issue, that day has not yet arrived.

Many questions remain open.  We hope to create robots that can make decisions according to a sense of right and wrong, but being a “moral agent” also requires being “held accountable” for those decisions.  What does it mean for a robot to be “held accountable”?  Most importantly, what is the actual content of the morality and ethics that govern the robot’s behavior?  Among humans, societies, subgroups, and individuals differ substantially in the social norms they follow.  Whose ethics and morality should a robot follow?  Whose trust should it be designed to earn?