This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Patient‐Centered Equitable and Safe Artificial Intelligence in Otolaryngology–Head and Neck Surgery
Citations: 2
Authors: 3
Year: 2024
Abstract
Case Presentation: In this hypothetical situation, a patient is diagnosed with early-stage laryngeal cancer. They are asked to submit a voice sample for a study that is building an algorithm to screen for head and neck cancer based on voice data. After discussions with the investigators, the patient consents to the study and submits voice recordings. Several months later, they call the clinic office upset, concerned that their voice recordings may be accessed by large corporations and that they could be identified through those recordings, leading to adverse insurance, employment, and personal consequences. They feel that the risks of participating in the study were not adequately explained by the investigators, and they wish to withdraw from the study and from all future clinical care with the hospital system as well.

Point: The current safeguards on artificial intelligence (AI) and machine learning (ML) in otolaryngology are not adequate to avoid biased systems that impact applicability, usability, access to care, and patients' privacy. To address this risk of bias and associated harm, the US Department of Health and Human Services (HHS) ruled in April 2024, under section 1557 of the Affordable Care Act, that the responsibility resides with covered entities, including health care providers, when AI/ML decision support tools lead to biased care. Indeed, one major concern about AI/ML is that it may exacerbate existing systemic discrimination, causing mistrust among patients. Unfortunately, racial and ethnic minority groups are unequally represented in clinical research biorepositories, the majority of which are composed of Caucasians. Furthermore, health care algorithms rely heavily on data from large hospitals in California, Massachusetts, and New York. Reliance on identical or similar foundation models, that is, "algorithmic monoculture," may institutionalize standardized errors and lead
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations