This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
WHO is responsible? Towards the normativity of AI-driven BCI technologies in product liability in healthcare
Citations: 0 · Authors: 2 · Year: 2026
Abstract
The growing integration of artificial intelligence (AI)-based brain–computer interface (BCI) systems into healthcare intensifies a fundamental legal question: who should bear responsibility for damage caused by adaptive technologies that directly interact with the human brain? Whilst the European Union’s new Product Liability Directive (2024/2853) represents a significant step towards modernising liability for digital products, it remains largely oriented towards identifiable technical defects and relatively stable products. This article examines AI-driven BCI systems through the lens of algorithmic normativity and introduces the concept of a Reflexive Normative Cascade to analyse how responsibility evolves from pre-legal normative expectations to societal experiential feedback and, potentially, to reactive legal crystallisation. It argues that although societal experience clearly articulates concerns relating to autonomy, mental integrity, and transparency, these insights are not always successfully translated into stable liability rules under the current product liability framework. In particular, damages arising from normative design choices embedded in algorithms, continuous post-market adaptation, and distributed responsibility chains expose persistent accountability gaps for both injured parties and producers. By situating the Product Liability Directive within this reflexive process, the article contends that liability for AI-driven BCIs cannot be fully addressed through static doctrinal tools alone and calls for a dynamic normative framework integrating transparency, traceability, and complementary insurance mechanisms.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,527 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,419 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,909 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,578 citations