This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Detecting model misconducts in decentralized healthcare federated learning
Citations: 53
Authors: 2
Year: 2021
Abstract
BACKGROUND: To accelerate healthcare/genomic medicine research and facilitate quality improvement, researchers have started cross-institutional collaborations to apply artificial intelligence to clinical/genomic data. However, there are real-world risks of incorrect models being submitted to the learning process, due to either unforeseen accidents or malicious intent. This may reduce the incentives for institutions to participate in the federated modeling consortium. Existing methods to deal with this "model misconduct" issue mainly focus on modifying the learning methods, and are therefore tied to a specific algorithm.

BASIC PROCEDURES: In this paper, we aim to solve the problem in an algorithm-agnostic way by (1) designing a simulator to generate various types of model misconduct, (2) developing a framework to detect model misconducts, and (3) providing a generalizable approach to identify model misconducts in federated learning. We considered three categories of misconduct: Plagiarism, Fabrication, and Falsification, and then developed a detection framework with three components: Auditing, Coefficient, and Performance detectors, with greedy parameter tuning.

MAIN FINDINGS: We generated 10 types of misconducts from models learned on three datasets to evaluate our detection method. Our experiments showed high recall with low added computational cost. Our proposed detection method can best identify misconduct at specific sites from any learning iteration, whereas it is more challenging to precisely detect misconduct at a specific site and a specific iteration.

PRINCIPAL CONCLUSIONS: We anticipate our study can support the enhancement of the integrity and reliability of federated machine learning on genomic/healthcare data.
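To make the idea behind a coefficient-based detector concrete, the sketch below flags sites whose submitted model coefficients deviate sharply from the cross-site consensus. This is an illustrative stand-in, not the paper's actual Auditing/Coefficient/Performance detectors; the function name, the median consensus, and the MAD-based robust z-score threshold are all assumptions made for this example.

```python
import numpy as np

def flag_suspect_updates(updates, z_threshold=3.0):
    """Flag sites whose coefficient vectors deviate strongly from the
    cross-site consensus. Illustrative only: a simple robust-outlier
    stand-in for a 'Coefficient detector', not the paper's method.

    updates: dict mapping site name -> 1-D coefficient vector (np.ndarray).
    Returns the set of site names flagged as suspect.
    """
    names = list(updates)
    coeffs = np.stack([updates[n] for n in names])   # shape: (sites, dims)
    consensus = np.median(coeffs, axis=0)            # robust per-dimension consensus
    dists = np.linalg.norm(coeffs - consensus, axis=1)  # per-site distance to consensus
    # Robust z-score via the median absolute deviation (MAD);
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12     # epsilon avoids division by zero
    z = 0.6745 * (dists - med) / mad
    return {n for n, score in zip(names, z) if score > z_threshold}

# Example: five honest sites with similar coefficients, one fabricated update.
updates = {f"site{i}": np.array([1.0, 2.0, 3.0]) + 0.01 * i for i in range(5)}
updates["rogue"] = np.array([100.0, -50.0, 80.0])
print(flag_suspect_updates(updates))  # → {'rogue'}
```

Using the median rather than the mean for the consensus keeps a single fabricated update from shifting the reference point it is measured against; per-iteration detection is harder precisely because honest site-to-site variance within one round can resemble mild falsification.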
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,694 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,984 citations
CBAM: Convolutional Block Attention Module
2018 · 21,802 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,499 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,702 citations