Gianluca Baldassarre


Gianluca Baldassarre received the B.A. and M.A. degrees in economics and the M.Sc. degree in cognitive psychology and neural networks from the University of Rome “La Sapienza,” Rome, Italy, in 1998 and 1999, respectively, and the Ph.D. degree in computer science from the University of Essex, Colchester, U.K., in 2003, with a focus on planning with neural networks. He was a Post-Doctoral Fellow with the Italian Institute of Cognitive Sciences and Technologies, National Research Council, Rome, researching swarm robotics; he has been a Researcher there since 2006 and coordinates the research group he founded, the Laboratory of Computational Embodied Neuroscience. From 2006 to 2009, he was a Team Leader of the EU Project “ICEA—Integrating Cognition Emotion and Autonomy”; from 2009 to 2013, he was the Coordinator of the European Integrated Project “IM-CLeVeR—Intrinsically-Motivated Cumulative-Learning Versatile Robots”; and he is currently a Team Leader of the EU Project “GOAL-Robots – Goal-based Open-ended Autonomous Learning Robots.” He has over 100 international peer-reviewed publications. His current research interests include the cumulative learning of multiple sensorimotor skills driven by extrinsic and intrinsic motivations. He studies these topics with two interdisciplinary approaches: with computational models constrained by data on brain and behavior, aiming to understand the latter, and with machine-learning/robotics approaches, aiming to produce technologically useful robots.


Will robots acquire cognition and emotions similar to those of humans? If so, what will be the impact on the status of humans?

What are the ethical implications of open-ended developing robots? In particular, what will be the relative status of future intelligent robots with respect to humans if they progressively acquire increasingly sophisticated cognition and emotions? I approach this question from a perspective in which cognition, emotions, and even consciousness are information-processing phenomena that can be fully explained in materialistic terms and hence be acquired by robots. This implies that robots endowed with a suitable developmental program for open-ended development will one day possess cognition, emotions, and even consciousness comparable in complexity to those of humans. The underlying mechanisms supporting some of these functions might differ from those of humans, but they will have a commensurable sophistication. When this happens, and is fully recognized by humans, humans will have to acknowledge robots as sentient intelligent beings analogous to themselves, and hence also grant them a status and legal rights similar to their own. Humans will thus become just one sentient intelligent species among many others. How will humans feel once they are no longer unique? And will they become “redundant” when surpassed in various cognitive and emotional features by some artificial beings, thus possibly incurring an existential risk? Not necessarily, if an “equal diversity principle”, according to which no natural or artificial being is superior to other beings in absolute terms, becomes a universal value among intelligent beings, natural and artificial alike. This principle, resonating with the no-free-lunch theorem in machine learning and the many-species-best-for-their-niche idea in biology, is grounded in the fact that any intelligent/emotional being, or some of its features, could be “better” than other beings/features in particular conditions.
This principle of diversity would make any intelligent sentient being feel personally valuable, and also be considered as such by all other intelligent sentient beings, thus deserving the right to exist and being worth preserving as an embodiment of knowledge that is potentially valuable in given, possibly not-yet-existing, situations.
