This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
FedViTBloc: Secure and privacy-enhanced medical image analysis with federated vision transformer and blockchain
Citations: 8 · Authors: 6 · Year: 2025
Abstract
The increasing prevalence of cancer necessitates advanced methodologies for early detection and diagnosis. Early intervention is crucial for improving patient outcomes and reducing the overall burden on healthcare systems. Traditional centralized methods of medical image analysis pose significant risks to patient privacy and data security, as they require the aggregation of sensitive information in a single location. Furthermore, these methods often suffer from limitations related to data diversity and scalability, hindering the development of universally robust diagnostic models. Recent advancements in machine learning, particularly deep learning, have shown promise in enhancing medical image analysis. However, the need to access large and diverse datasets for training these models introduces challenges in maintaining patient confidentiality and adhering to strict data protection regulations. This paper introduces FedViTBloc, a secure and privacy-enhanced framework for medical image analysis utilizing Federated Learning (FL) combined with Vision Transformers (ViT) and blockchain technology. The proposed system ensures patient data privacy and security through fully homomorphic encryption and differential privacy techniques. By employing a decentralized FL approach, multiple medical institutions can collaboratively train a robust deep-learning model without sharing raw data. Blockchain integration further enhances the security and trustworthiness of the FL process by managing client registration and ensuring secure onboarding of participants. Experimental results demonstrate the effectiveness of FedViTBloc in medical image analysis while maintaining stringent privacy standards, achieving 67% accuracy and reducing loss below 2 across 10 clients, ensuring scalability and robustness.
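The decentralized FL approach the abstract describes rests on server-side aggregation of locally trained models. A minimal sketch of the standard federated averaging (FedAvg) step is shown below; the function and variable names are illustrative assumptions, not the paper's actual implementation, and the encryption and differential-privacy layers mentioned in the abstract are omitted for brevity.

```python
# Minimal FedAvg sketch: combine client model weights into a global model,
# weighted by each client's local dataset size. Illustrative only -- not
# the FedViTBloc implementation described in the paper.

def fed_avg(client_weights, client_sizes):
    """Return the size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        frac = size / total  # this client's share of all training samples
        for i, w in enumerate(weights):
            global_weights[i] += w * frac
    return global_weights

# Example: three clients with differently sized local datasets
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 20, 70]
print(fed_avg(clients, sizes))  # weighted mean per parameter
```

In a real deployment each element of `client_weights` would be a full model state (e.g. ViT parameters), and only these updates, never raw images, would leave each institution.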
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations