Vieri Giuliano Santucci received the B.Sc. degree in philosophy from the University of Pisa, Pisa, Italy, the M.S. degree in theories and techniques of knowledge from the Faculty of Philosophy, University of Rome “La Sapienza,” Rome, Italy, and the Ph.D. degree in computer science from the University of Plymouth, Plymouth, U.K., in 2016, with a focus on the development of robotic architectures that allow artificial agents to autonomously improve their competences on the basis of the biologically-inspired concept of intrinsic motivations. He is a Researcher with the Istituto di Scienze e Tecnologie della Cognizione, Consiglio Nazionale delle Ricerche, Rome. He has published in peer-reviewed journals, presented at several international conferences, and actively contributed to the European Integrated Projects “IM-CLeVeR – Intrinsically-Motivated Cumulative-Learning Versatile Robots” and “GOAL-Robots – Goal-based Open-ended Autonomous Learning Robots”. His current research interests include learning processes, motivations, and the concept of representations, in both biological and artificial agents.
Can we limit open-ended development?
A critical issue related to the constant development of sophisticated and powerful artificial systems is the
necessity to constrain potentially harmful and unethical behaviours.
Although increasing autonomy broadens the risk that artificial agents will undertake pathways leading
to unwanted effects, as long as AI and robotics systems are focussed on specific tasks, final goals are
assigned by their developers (or end-users), who for this reason have to face the ethical value of those goals.
But what if we assign artificial agents the meta-goal of discovering new goals and continuously acquiring
knowledge and competence? How can we control or direct the activity of machines whose goal is
(potentially) to discover every possible activity?
In this talk we will try to highlight some important points relating intrinsically motivated open-ended
learning to ethical issues:
1) In the realm of intrinsically motivated open-ended learning, every constraint seems to clash with the
main purpose of this branch of research.
2) Autonomy in setting their own goals marks a shift for artificial agents that aligns them, from an
ethical perspective, with humans.
3) While in biology intrinsic motivations are a strategy that evolution discovered to maximise the fitness
of some species, in developmental robotics they are directly imposed as the overall implicit goal of
artificial agents. This overlooked fact provides artificial agents with all the power of curiosity typical of
IMs, without the boundaries that in biological agents are provided by “extrinsic motivations” (and in
particular by those drives related to the preservation of the single agent, its offspring, and its allies).
4) As we are taking inspiration from biology to build curious and autonomous agents, we should try to
follow a similar pathway to make these agents autonomously develop ethical principles. So, on the one
hand, we should make curiosity, autonomy, and IMs flourish in a broader context that also takes into
consideration other drives (both AI-centred and, hypothetically, human-centred). On the other hand,
we should consider the intimate connection between ethics and the social environment, and develop artificial
agents in similar conditions.
5) Finally, we have to keep in mind that if we are looking for motivational (and ethical) autonomy, we
have to face the “inevitable possibility” that artificial agents, like humans, will not always do “the right
thing” (whatever “right” means), and even that they will develop a completely different ethics from
our own.