This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Making Privacy-Preserving AI Accessible: A Practitioner-Oriented Framework
Citations: 0
Authors: 3
Year: 2025
Abstract
The increasing deployment of machine learning systems in sensitive domains has heightened awareness of privacy risks, yet significant barriers remain in translating theoretical privacy guarantees into practical implementations. Building on the NIST Adversarial Machine Learning Taxonomy (2025), we present a community-driven framework that addresses the implementation gap in privacy-preserving machine learning (PPML). Our contribution centers on a curated repository of over 30 privacy-preserving tools, each mapped to specific adversarial threats and accompanied by implementation guidance, code examples, and empirically grounded performance assessments. We organize these tools around five operational ML pipeline phases: Data Collection, Data Processing, Model Training, Model Deployment, and Privacy Governance, with systematic risk identification and structured decision frameworks for each phase. We illustrate the framework’s practical application through MedAI, a case study of a fictitious healthcare company that demonstrates methodical privacy-preserving technique selection in the model training phase. This work contributes to the broader goal of making privacy-aware AI development more accessible by providing actionable guidance that bridges the theory-practice gap in PPML implementation.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,809 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,896 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,556 citations
Fairness through awareness
2012 · 3,317 citations
AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations
2018 · 3,292 citations