This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Adaptive, Privacy-Preserving Small Language Models for Multi-Task Clinical Assistance
Citations: 0
Authors: 7
Year: 2026
Abstract
The purpose of this study is to evaluate whether a single fine-tuned SLM can match or exceed the performance of LLMs across diverse clinical tasks, enabling hospitals to build tailored, privacy-preserving, efficient, and deployable language models without managing multiple task-specific systems. We used SLMs of varying sizes and applied low-rank adaptation (LoRA) for fine-tuning across three clinical tasks: (1) medical report labeling, (2) DICOM series description harmonization, and (3) impression generation from findings. These tasks were constructed from two datasets: the public Open-i Indiana University Chest X-ray Dataset and an in-house brain MRI DICOM metadata dataset. We compared single-task SLMs, a multi-task SLM (our proposed configuration), and GPT-4o with zero-shot and few-shot prompting. We found OPT-350M to be the optimal SLM. In medical report labeling, the multi-task SLM achieved an F1 score of 0.894, versus 0.728 for GPT-4o with additional prompt engineering. In DICOM series description harmonization, the multi-task SLM achieved an accuracy of 0.975, versus 0.878 for GPT-4o with additional prompt engineering. In impression generation from findings, the multi-task SLM achieved an average Likert scale score of 4.39 ± 1.00, compared to GPT-4o's 3.65 ± 1.00 (p = 0.0008). This study demonstrates that a single fine-tuned SLM can serve as a general-purpose clinical assistant, offering performance on par with or better than larger models. With lower resource requirements, greater customizability, privacy protection, and strong task generalization, fine-tuning one SLM to support multiple clinical tasks meets the practical demands of clinical AI deployment in both high-resource and resource-limited healthcare settings.
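The low-rank adaptation (LoRA) approach described in the abstract replaces full fine-tuning with a small trainable low-rank update added to each frozen weight matrix, W' = W + (α/r)·BA. The sketch below illustrates that mechanism on a single linear layer with NumPy; the dimensions, rank, and scaling values are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal LoRA sketch for one frozen linear layer (illustrative sizes,
# not the paper's actual configuration).
rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 8      # hypothetical layer dimensions and LoRA rank
alpha = 16                      # LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight (not trained)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-initialized: update starts at zero


def lora_forward(x, W, A, B, alpha, r):
    """Forward pass through the frozen weight plus the scaled low-rank update."""
    return x @ (W + (alpha / r) * (B @ A)).T


x = rng.standard_normal((1, d_in))
y0 = lora_forward(x, W, A, B, alpha, r)

# Because B is zero at initialization, the adapted layer reproduces the
# frozen layer exactly; only A and B would be updated during fine-tuning.
assert np.allclose(y0, x @ W.T)
```

Only A and B (d_in·r + r·d_out parameters) are trained, which is why a small model like OPT-350M can be adapted to multiple clinical tasks cheaply, on-premises.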
Similar Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,402 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,894 citations
Deep Learning with Differential Privacy
2016 · 5,627 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,595 citations
Federated Machine Learning
2019 · 5,579 citations