This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Navigating AI in Healthcare: Examining Medical Liability and the Imperative of Informed Consent in Addressing AI-Driven Prescription Errors
1
Citations
3
Authors
2024
Year
Abstract
With the advent of information technology and Artificial Intelligence, the world has moved towards a technological civilisation. AI has secured an integral place in almost every sector worldwide, including the medical sector. The difficulty lies in determining liability for AI-generated medical treatments that harm patients, as AI-driven medical decisions may result in serious injury or error. This paper analyses the challenges of attributing liability, i.e. to AI developers and to hospitals employing AI in their treatments. There is no legislation governing medical or tortious liability on the part of AI, yet there are numerous cases in which medical professionals have sought the help of Artificial Intelligence to resolve medical ambiguities. Some doctors have been wary of trusting AI-generated solutions, fearing that doing so could expose them to severe liability and the threat of prosecution. Technological advancement is nonetheless essential, as it facilitates a country's development in every sector. This paper highlights the importance of AI in healthcare and the need to impose different levels of liability on these players; it also analyses compensatory liability where a fault has occurred. We examine existing legal frameworks, ethical guidelines, and relevant cases, highlighting the need for a comprehensive approach to AI liability in healthcare. The research further explores regulatory approaches to medical AI liability across different jurisdictions, identifying best practices and areas for improving patient safety. Ultimately, this paper seeks to ensure that AI in healthcare is developed and deployed responsibly, prioritising patient safety and well-being while fostering innovation.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,485 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,371 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,827 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,549 cit.