This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Out of Sight, Out of Mind: Opacity, E(X)pl(AI)nability and International Humanitarian Law
Citations: 0 · Authors: 1 · Year: 2025
Abstract
The increasing reliance on opaque Artificial Intelligence-based Decision Support Systems (AI-DSSs) in armed conflicts raises pressing questions about their compatibility with international humanitarian law (IHL). This article examines the legal implications of deploying AI-DSSs in military operations, focusing on their transparency and (lack of) explainability. It argues that opacity undermines the IHL principles of distinction and proportionality, potentially resulting in unlawful harm. The analysis explores whether Explainable AI (XAI) can mitigate these risks and whether its integration should be considered a normative requirement under IHL. The article concludes that, while XAI may enhance compliance - particularly in system development and post-deployment review - its operational use remains fraught with challenges. A cautious, research-informed approach is therefore essential before XAI can be embedded into the legal framework of IHL.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,725 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,886 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,512 citations
Fairness through awareness
2012 · 3,302 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,202 citations