This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Bias-Aware Multimodal Medical Diagnosis with Large Language Models for Underrepresented Conditions
Citations: 0 · Authors: 2 · Year: 2025
Abstract
We introduce a real-time multimodal AI assistant for medical diagnosis that combines large language models (LLMs) with structured patient data, medical imaging, and spoken physician inputs. The system aims to enhance early and accurate diagnosis by integrating diverse clinical information within a unified workflow. It employs a CNN for interpreting medical images, leverages speech-to-text to transcribe verbal notes, and transforms structured patient data into natural language prompts for LLM-driven reasoning. To mitigate diagnostic bias from class-imbalanced training data, we propose a bias-aware approach that integrates class-weighted loss for image models, input prompt rebalancing for LLMs, and human-in-the-loop feedback. Tests on a synthetic yet clinically relevant dataset show improved performance on underrepresented classes versus a baseline multimodal LLM system. We also evaluate GPT-4 (OpenAI), LLaMA-3 (Meta), and DeepSeek V3, comparing diagnostic accuracy, latency, and bias sensitivity. Results underscore the value of combining multimodal AI with imbalance-aware techniques to enhance fairness and reliability in AI-driven healthcare.
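The abstract mentions a class-weighted loss for the image models but does not state the exact formulation. A common approach, shown here purely as an illustrative sketch (the function names and the inverse-frequency weighting scheme are assumptions, not taken from the paper), is to upweight rare classes in proportion to how underrepresented they are:

```python
import numpy as np

def class_weights(labels, n_classes):
    """Inverse-frequency weights: rarer classes receive larger weights."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    # Normalize so that a perfectly balanced dataset yields weight 1.0 per class.
    return counts.sum() / (n_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(probs, labels, weights):
    """Mean cross-entropy with each sample scaled by its class weight."""
    eps = 1e-12  # guard against log(0)
    per_sample = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(weights[labels] * per_sample))

# Imbalanced toy labels: class 1 appears once, class 0 four times,
# so class 1's weight is four times larger.
labels = np.array([0, 0, 0, 0, 1])
w = class_weights(labels, 2)  # array([0.625, 2.5])
```

During training, the larger weight on the rare class increases its contribution to the gradient, which is one standard way to counteract the bias toward majority classes that the abstract describes.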
Related Works
"Why Should I Trust You?"
2016 · 14,366 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,716 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,254 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,678 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,430 citations