
How a non-conscious robot could be an agent with capacity for morally responsible behaviour

29 August 2024

People have different opinions about which conditions robots would need to fulfil—and for what reasons—to be moral agents. Standardists hold that specific internal states (like rationality, free will or phenomenal consciousness) are necessary in artificial agents, and that robots are thus not moral agents since they lack these internal states. Functionalists hold that what matters are certain behaviours and reactions—independent of what the internal states may be—implying that robots can be moral agents as long as their behaviour is adequate. This article defends a standardist view in the sense that internal states are what matters for determining the moral agency of a robot, but it is unique in being an internalist theory that defends a large degree of robot responsibility, even though humans, but not robots, are taken to have phenomenal consciousness. The view is based on an event-causal libertarian theory of free will and a revisionist theory of responsibility, which together explain how free will and responsibility can come in degrees. This is meant as a middle position between typical compatibilist and libertarian views, securing the strengths of both sides. The theories are then applied to robots, making it possible to be quite precise about what it means for robots to have a certain degree of moral responsibility, and why. Defending this libertarian form of free will and responsibility implies that non-conscious robots can have a stronger form of free will and responsibility than is commonly defended in the literature on robot responsibility.

Atle Ottesen Søvik, Professor of Systematic Theology, MF Norwegian School of Theology, Oslo, Norway.

Published in AI and Ethics, 2022.