
Blind spots in AI ethics

29 August 2024

This paper critically discusses blind spots in AI ethics. AI ethics discourses typically stick to a certain set of topics concerning principles revolving mainly around explainability, fairness, and privacy. All these principles can be framed in a way that enables their operationalization by technical means. However, this requires stripping down the multidimensionality of very complex social constructs to something that is idealized, measurable, and calculable. Consequently, rather conservative, mainstream notions of the mentioned principles are conveyed, whereas critical research, alternative perspectives, and non-ideal approaches are largely neglected. Hence, one part of the paper considers specific blind spots regarding the very topics AI ethics focuses on. The other part critically discusses blind spots regarding topics that hold significant ethical importance but are hardly, if at all, discussed in AI ethics. Here, the paper focuses on negative externalities of AI systems, discussing as examples the casualization of clickwork, AI ethics’ strict anthropocentrism, and AI’s environmental impact. Ultimately, the paper is intended as a critical commentary on the ongoing development of the field of AI ethics. It makes the case for a rediscovery of the strength of ethics in the AI field, namely its sensitivity to suffering and harms that are caused by and connected to AI technologies.

Thilo Hagendorff, University of Stuttgart: Stuttgart, Germany
Research Group Leader (Interchange Forum for Reflecting on Intelligent Systems).

Journal: “AI and Ethics”, 2022.