This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
ChatGPT and the Law: Exploring the Liability and Accountability Frameworks for AI-Assisted Decision-Making
Citations: 0
Authors: 1
Year: 2025
Abstract
The rapid integration of generative AI models like ChatGPT into decision-making processes across sectors such as healthcare, finance, and legal services has amplified concerns over liability and accountability. This research article delves into the evolving legal landscapes governing AI-assisted decisions, examining how traditional tort, contract, and regulatory laws apply to errors, biases, or harms stemming from AI outputs. By analyzing landmark cases, including those involving algorithmic discrimination and misinformation propagation, the study highlights gaps in current frameworks, such as the attribution of fault between AI developers, users, and deployers. It proposes a hybrid accountability model that incorporates strict liability for high-risk applications, mandatory transparency requirements, and ethical auditing mechanisms to mitigate risks. Drawing on interdisciplinary insights from law, ethics, and computer science, the article argues for proactive international standards to ensure responsible AI deployment. Ultimately, it underscores the need for adaptive legal reforms to balance innovation with public protection in an AI-driven era.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,553 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,444 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,943 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations