OpenAlex · Updated hourly · Last updated: 05.04.2026, 02:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Patient‐Centered Equitable and Safe Artificial Intelligence in Otolaryngology–Head and Neck Surgery

2024 · 2 citations · Otolaryngology · Open Access
Open full text at publisher

Citations: 2 · Authors: 3 · Year: 2024

Abstract

Case Presentation

In this hypothetical situation, a patient is diagnosed with early-stage laryngeal cancer. They are asked to submit a voice sample for a study that is building an algorithm to screen for head and neck cancer based on voice data. After discussions with the investigators, the patient consents to the study and submits voice recordings. Several months later, they call the clinic office upset, concerned that their voice recordings may be accessed by large corporations and that they could be identified from those recordings, leading to adverse insurance coverage, employment, and personal consequences. They feel that the risks of participating in the study were not adequately explained by the investigators, and they wish to withdraw from the study and from all future clinical care with the hospital system as well.

Point

The current safeguards on artificial intelligence (AI) and machine learning (ML) in otolaryngology are not adequate to avoid biased systems that affect applicability, usability, access to care, and patients' privacy. To address this risk of bias and associated harm, the US Department of Health and Human Services (HHS) ruled in April 2024, under Section 1557 of the Affordable Care Act, that responsibility resides with covered entities, including health care providers, when AI/ML decision support tools lead to biased care. Indeed, one major concern about AI/ML is that it may exacerbate existing systemic discrimination, causing mistrust among patients. Unfortunately, racial and ethnic minority groups are unequally represented in clinical research biorepositories, the majority of which are composed of Caucasians. Furthermore, health care algorithms rely heavily on data from large hospitals in California, Massachusetts, and New York. Reliance on identical or similar foundation models, that is, "algorithmic monoculture," may institutionalize standardized errors and lead …

Topics

Artificial Intelligence in Healthcare and Education · Medical Malpractice and Liability Issues · Ethics in Clinical Research