This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
A proposal for more useful AI ethics: hierarchical principlism & the principle of compassion
Citations: 0 · Authors: 1 · Year: 2026
Abstract
Principlism is one of the leading approaches to AI ethics. Developed originally to address bioethical issues in medicine and research, the theory requires decision-makers to consider various ethical principles to justify their actions. Despite its dominance in bioethics and growing popularity in AI ethics, principlism faces serious theoretical and practical criticisms. One serious criticism is that since ethical principles are combined ad hoc, conflicts inevitably arise between them, leading to inconsistencies. While defenders of principlism propose a process for resolving such disputes, contemporary critics argue that this process is incomplete, and at best, principlist frameworks can only help structure analysis and justifications intelligently, but cannot provide definitive, action-guiding moral prescriptions. AI principlists have not adequately reckoned with this theoretical limitation. Here, I propose a solution to conflicts between principles by designating one principle as an arbitrating principle above others, what I call hierarchical principlism. Since attempts to use existing principles as arbiters have led to controversy, I suggest using a new principle, a modified version of the principle of beneficence requiring the minimization of suffering, which I call the principle of compassion, to arbitrate these conflicts. I argue that this approach, which I call compassionate principlism, leads to fewer moral objections and inconsistencies and provides more definitive action-guiding moral prescriptions in AI ethics than traditional principlism. I conclude by applying compassionate principlism to ethical dilemmas in AI ethics, including misinformation, bias, and automation.
Related works
The global landscape of AI ethics guidelines
2019 · 4,721 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,884 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,510 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,200 citations