OpenAlex · Updated hourly · Last updated: 2026-04-01, 14:34

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Privacy Leakage in Federated Learning in Radiology Reports: A Comparative Evaluation of Tokenizer-Driven Privacy Risks (Preprint)

2025 · 0 citations · Open Access
Open full text at the publisher

Citations: 0
Authors: 6
Year: 2025

Abstract

BACKGROUND: Federated learning (FL) enables multi-institutional model training on clinical text without sharing raw data; however, gradient inversion methods can reconstruct sensitive information from shared model updates. The extent of such privacy leakage in FL applied to radiology reports, and the role of tokenizer design, remain unclear.

OBJECTIVE: To quantify gradient-based reconstruction of radiology report text in an FL setting and to compare privacy risk across three transformer tokenization strategies in a controlled, tokenizer-aware evaluation.

METHODS: Six FL clients trained a GPT-2–style transformer (117M parameters; sequence length 32) on two public radiology corpora comprising 368,751 diagnostic reports, 98,206 discharge summaries, and 1,500 MIMIC-CXR free-text reports. Models were trained using three tokenizers (GPT-2, RadBERT, LLaMA-2) with batch sizes of 64, 128, and 256. A curious-server threat model was assumed, and analytic gradient inversion was applied to recover text. Reconstruction fidelity was measured over five runs using exact sentence accuracy, S-BLEU, and ROUGE-L.

RESULTS: Exact sentence reconstruction ranged from 33% to 42% across tokenizers. At batch size 64, accuracy was 42.1% (GPT-2), 42.3% (RadBERT), and 39.4% (LLaMA-2), decreasing to 37.3%, 37.2%, and 34.3% at batch size 256. S-BLEU scores declined with increasing batch size (e.g., GPT-2: 0.44→0.33; RadBERT: 0.48→0.35; LLaMA-2: 0.39→0.30). RadBERT yielded higher reconstruction fidelity and greater recovery of clinical terms, but no tokenizer prevented leakage.

CONCLUSIONS: Substantial portions of radiology report text can be reconstructed from FL gradients even with larger batch sizes and domain-specific tokenizers. Tokenizer design influences leakage severity and should be incorporated into privacy evaluations for clinical language models. Integrating safeguards such as secure aggregation and differential privacy is necessary to meet HIPAA and GDPR requirements when deploying FL for radiology NLP.

CLINICALTRIAL: Not applicable.
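One intuition behind the gradient leakage described in the methods can be shown with a toy example (an assumed simplification for illustration, not the paper's actual attack): in a transformer, the gradient of the loss with respect to the token-embedding matrix is nonzero only in rows for tokens that appeared in a client's batch, so a curious server observing that gradient can read off which tokens were present. The function names, loss, and dimensions below are hypothetical.

```python
def embedding_gradient(token_ids, vocab_size, dim):
    """Gradient of a toy loss (sum of looked-up embeddings) w.r.t. the
    embedding matrix E. Each occurrence of token t adds the upstream
    gradient (here: 1.0) to row t, so grad[t] also counts occurrences."""
    grad = [[0.0] * dim for _ in range(vocab_size)]
    for t in token_ids:
        for d in range(dim):
            grad[t][d] += 1.0
    return grad

def leaked_tokens(grad):
    """Embedding rows that received any gradient reveal the batch's tokens."""
    return sorted(t for t, row in enumerate(grad) if any(row))

batch = [5, 2, 9, 2]                    # hypothetical client token ids
g = embedding_gradient(batch, vocab_size=12, dim=4)
print(leaked_tokens(g))                 # -> [2, 5, 9]
```

Recovering the bag of tokens is only the starting point; the analytic inversion evaluated in the paper goes further and reconstructs token order, which is why fidelity is scored with exact sentence accuracy, S-BLEU, and ROUGE-L rather than set overlap alone.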


Topics

Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging