This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Federated learning with hyper-network—a case study on whole slide image analysis
9
Citations
4
Authors
2023
Year
Abstract
Federated learning (FL) is a new kind of Artificial Intelligence (AI) aimed at preserving data privacy by decentralizing the training data for the deep learning model. This technique of data security and privacy sheds light on many critical domains with highly sensitive data, including medical image analysis. Developing a strong, scalable, and precise deep learning model has proven to depend on a variety of high-quality data from different centers. However, data holders may not be willing to share their data given privacy restrictions. In this paper, we approach this challenge with a federated learning paradigm. Specifically, we present a case study on the whole slide image classification problem. At each local client center, a multiple-instance learning classifier is developed to conduct whole slide image classification. We introduce a privacy-preserving federated learning framework based on a hyper-network to update the global model. The hyper-network is deployed at the global center and produces the weights of the local network conditioned on its input. In this way, the hyper-network can simultaneously learn a family of local client networks. Instead of communicating raw data, only model parameters injected with noise are transferred between the local clients and the global model. Using a large scale of whole slide images with only slide-level labels, we evaluated our method on two different whole slide image classification problems. The results demonstrate that our proposed federated learning model based on a hyper-network can effectively leverage multi-center data to develop a more accurate model for classifying a whole slide image. Its improvements over the isolated local centers and the commonly used federated averaging baseline are significant. Code will be available.
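The core mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hyper-network is reduced to a single linear layer mapping a learnable client embedding to a client's flattened model weights, clients return only noise-injected parameter updates (never raw data), and all dimensions, names, and the update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: each client model is a linear predictor with
# D inputs; the hyper-network maps a client embedding (dim E) to that
# client's flattened weight vector.
D, E, N_CLIENTS = 8, 4, 3

# Hyper-network parameters (a single linear layer here, for brevity),
# kept at the global center alongside the per-client embeddings.
H = rng.normal(0.0, 0.1, size=(E, D))
embeddings = rng.normal(0.0, 1.0, size=(N_CLIENTS, E))

def client_weights(i):
    """Hyper-network produces the weights of client i's local model."""
    return embeddings[i] @ H  # shape (D,)

def local_update(w, X, y, lr=0.1, noise_std=0.01):
    """One local gradient step; only this noisy weight delta is sent
    back to the global center, never the raw data X, y."""
    grad = X.T @ (X @ w - y) / len(y)  # MSE gradient
    delta = -lr * grad
    return delta + rng.normal(0.0, noise_std, size=delta.shape)

# One simulated federated round over all clients.
for i in range(N_CLIENTS):
    X = rng.normal(size=(16, D))          # local (private) data
    y = X @ rng.normal(size=D)
    delta = local_update(client_weights(i), X, y)
    # Global center back-propagates the client's delta through the
    # hyper-network: for a linear hyper-network, dL/dH is the outer
    # product of the client embedding and the weight delta.
    H += np.outer(embeddings[i], delta)
```

Because the hyper-network conditions on a per-client embedding, a single set of global parameters `H` can produce distinct personalized models for every client, which is the property the paper exploits.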
Similar works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,402 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,892 cit.
Deep Learning with Differential Privacy
2016 · 5,620 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,594 cit.
Federated Machine Learning
2019 · 5,574 cit.