This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
CrossPriv
2
Citations
2
Authors
2020
Year
Abstract
The design and implementation of artificial-intelligence-driven software that keeps user data private is a complex yet necessary requirement in the current times. Developers must consider several ethical and legal challenges while developing services that relay massive amounts of private information over a network grid that is susceptible to attack from malicious agents. In most cases, organizations adopt a traditional model training approach in which publicly available data, or data specifically collated for the task, is used to train the model. Specifically in the healthcare sector, running deep learning algorithms on limited local data may introduce significant bias into the system, and the model's accuracy may not be representative due to a lack of richly covariate training data. In this paper, we propose CrossPriv, a user privacy preservation model for cross-silo federated learning systems that dictates some preliminary norms for SaaS-based collaborative software. We discuss the client- and server-side characteristics of the software deployed on each side. Further, we demonstrate the efficacy of the proposed model by training a convolutional neural network on the distributed data of two different silos to detect pneumonia from X-rays while not sharing any raw data between the silos.
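The cross-silo setup described in the abstract follows the general federated learning pattern: each silo trains on its own data and only model parameters are aggregated centrally. A minimal sketch of that aggregation step, a sample-size-weighted average in the style of FedAvg (the function name `federated_average` and the toy weight vectors are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def federated_average(weights_per_silo, samples_per_silo):
    """Weighted average of per-silo model weights by local dataset size.

    Only the trained weights cross the network; the raw data
    (e.g. each hospital's X-ray images) never leaves its silo.
    """
    total = sum(samples_per_silo)
    avg = np.zeros_like(weights_per_silo[0], dtype=float)
    for w, n in zip(weights_per_silo, samples_per_silo):
        avg += (n / total) * np.asarray(w, dtype=float)
    return avg

# Two hypothetical silos with different amounts of local data
silo_a = np.array([1.0, 2.0])  # weights after local training at silo A
silo_b = np.array([3.0, 4.0])  # weights after local training at silo B
global_w = federated_average([silo_a, silo_b], [100, 300])
print(global_w)  # → [2.5 3.5]
```

The server would broadcast `global_w` back to both silos for the next round of local training; in a real deployment the weight vectors would be full CNN parameter tensors rather than two-element arrays.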
Similar Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,402 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,897 citations
Deep Learning with Differential Privacy
2016 · 5,629 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,595 citations
Federated Machine Learning
2019 · 5,585 citations