OpenAlex · Updated hourly · Last updated: 27.03.2026, 12:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial Intelligence Face Swapping: Promise and Peril in Health Care

2024 · 3 citations · 1 author · Mayo Clinic Proceedings: Digital Health · Open Access

Open full text at publisher

Abstract

We read the recent article by Hou et al [1] with great interest. We believe that the paper offers a pivotal insight into the nuanced application of artificial intelligence (AI) in health care, particularly in enhancing patient privacy through face swapping or face transformation. Although the study commendably addresses the use of AI for protecting patient identities in medical videos, it inadvertently highlights a broader, more complex challenge: the rising threat of AI-generated disinformation, or deepfakes.

The phenomenon of deepfakes, characterized by hyper-realistic digital fabrications crafted using AI, poses a significant threat in the health care domain. Creating deepfakes no longer requires extensive programming expertise, making it accessible to a broader user base; generating and disseminating health-related disinformation has become alarmingly straightforward [2]. This ease of access raises an urgent concern over the potential misuse of AI to undermine public trust in the medical and scientific community.

Hou et al [1] focused on using this technology to protect the privacy of patients in medical videos. What happens when malicious intent turns this technology toward the other end of the spectrum, creating videos vilifying vaccines or promoting vaping? Or marketing unregulated medical and dental products to the unsuspecting public through deepfaked figures of renown? What about herbal and alternative medicines? In such cases, the line between reality and fabrication becomes dangerously blurred to the uninitiated observer. The implications for public health can be dire, ranging from the direct harm of using untested, ineffective, or harmful products to the erosion of trust in legitimate medical advice and professionals [3].

This juxtaposition, AI as a tool for enhancing patient privacy in clinical settings and AI as a medium for crafting disinformation, underscores a critical need for a balanced and informed approach to AI in health care. At this juncture, the absence of comprehensive national or global regulations concerning the use of AI in health care is starkly evident. There is an imperative need for collaborative action involving health care professionals, AI scientists, and legislators to establish robust frameworks that ensure the responsible use of AI. The goal should be 2-fold: to harness AI's potential for enhancing patient care and privacy while mitigating the risks associated with AI-generated disinformation. This balanced approach is crucial to maintaining the integrity and trustworthiness of medical information in the age of AI.

The authors report no competing interests.

References

1. Hou J.-C., Li C.-J., Chou C.-C., et al. Artificial intelligence-based face transformation in patient seizure videos for privacy protection. Mayo Clin Proc Digit Health. 2023;1:619-628.
2. Menz B.D., Modi N.D., Sorich M.J., Hopkins A.M. Health disinformation use case highlighting the urgent need for artificial intelligence vigilance: weapons of mass disinformation. JAMA Intern Med. 2024;184:92-96.
3. Blendon R.J., Benson J.M., Hero J.O. Public trust in physicians—US medicine in international perspective. N Engl J Med. 2014;371:1570-1572.

Reply to: Artificial Intelligence Face Swapping: Promise and Peril in Health Care

Mayo Clinic Proceedings: Digital Health, Vol. 2, Issue 1 · Open Access (preview)

We thank Professor Patil for his pertinent comments in response to our article entitled "Artificial intelligence-based face transformation in patient seizure videos for privacy protection." We agree that, like any other technological or scientific advance, artificial intelligence methods that alter video material can be used with either beneficent or malevolent intent. There is a growing literature on the risks and pitfalls of deepfakes in digital media, as we mentioned in our paper. Deepfakes are artificial but hyper-realistic videos, audio, and images created by algorithms that can be used to manipulate opinion by conveying a false message convincingly disguised as fact.


Artificial Intelligence in Healthcare and EducationMisinformation and Its ImpactsPatient Dignity and Privacy