This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Autonomous AI systems in the face of liability, regulations and costs
68
Citations
5
Authors
2023
Year
Abstract
Autonomous AI systems in medicine promise improved outcomes but raise concerns about liability, regulation, and costs. With the advent of large language models, which can understand and generate medical text, the urgency of addressing these concerns increases, as such models create opportunities for more sophisticated autonomous AI systems. This perspective explores the liability implications for physicians, hospitals, and creators of AI technology, as well as the evolving regulatory landscape and payment models. Physicians may be favored in malpractice cases if they follow rigorously validated AI recommendations. However, AI developers may face liability for failing to adhere to industry-standard best practices during development and implementation. The evolving regulatory landscape, led by the FDA, seeks to ensure transparency, evaluation, and real-world monitoring of AI systems, while payment models such as MPFS, NTAP, and commercial payers adapt to accommodate them. The widespread adoption of autonomous AI systems could streamline workflows and allow doctors to concentrate on the human aspects of healthcare.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations