This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Clinical Impact of AI-Enhanced Imaging: Improving Outcomes Through Visual Data
0
Citations
7
Authors
2025
Year
Abstract
Purpose: The integration of AI-driven chatbots, specifically OpenAI's ChatGPT, into medical fields such as anesthesia and critical care medicine has the potential to enhance communication through the generation of scientific illustrations. This study explores the efficacy, limitations, and biases associated with AI-generated medical images.

Methods: Using ChatGPT, coupled with OpenAI's DALL-E, we simulated the process of generating medical illustrations from textual prompts. We focused on the case of Chronic Heart Failure (CHF), analyzing multiple attempts to create accurate medical images based on researcher inputs. A qualitative assessment was performed to identify anatomical inaccuracies and biases. Additionally, the potential for visual literacy to augment AI-generated outputs was discussed.

Results: Several images representing CHF were generated, yet these outputs revealed significant limitations. Critical anatomical errors were observed, such as the depiction of a patient with three kidneys and incorrect organ positioning. Gender bias also emerged, as the AI failed to reliably generate female-specific medical images. These inaccuracies and biases stemmed from the underlying data and algorithms. Furthermore, a lack of expertise in crafting precise textual prompts made it difficult to obtain useful images, highlighting the need for specialized training.

Conclusions: AI tools like ChatGPT hold promise for advancing visual communication in medicine, but current limitations in accuracy and bias management remain critical challenges. Careful oversight, human expertise, and multidisciplinary collaboration are essential to ensure that AI-generated content is both reliable and equitable. Training in visual literacy and image interpretation could mitigate some of these challenges, promoting the safe adoption of AI in clinical and scientific contexts.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations