OpenAlex · Updated hourly · Last updated: 01.04.2026, 10:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Privacy-Preserving, Regulatory-Grade Real-Time Healthcare AI

2025 · 0 citations · American Journal of Technology · Open Access
Open full text at publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

Aim: This study aims to design and evaluate a roadmap for deploying privacy-preserving, regulatory-grade, real-time healthcare artificial intelligence (AI) systems. The systems should be capable of delivering high diagnostic accuracy, low latency, and strict compliance with healthcare data protection regulations without centralizing protected health information.

Methods: The study employs a distributed AI architecture that integrates federated learning for decentralized model training, differential privacy with a privacy budget of ≤1%, and selective homomorphic encryption applied to high-risk operations. The framework is evaluated using multi-site experiments on MIMIC-III datasets and hospital telemetry data ranging from 10,000 to 1,000,000 records. Streaming and incremental training pipelines are implemented, supported by edge-based feature extraction, microservice isolation, secure aggregation, and tamper-evident MLOps audit trails aligned with HIPAA and GDPR requirements.

Results: The results demonstrate that bedside inference achieves diagnostic accuracy of at least 95% with end-to-end decision latency of ≤200 ms and average training latency of 180 ms with ≤25 ms jitter. Incremental and streaming training improved throughput by up to 50% compared to batch retraining, while maintaining real-time accuracy of ≥95%. Differential privacy incurred a utility loss of no more than five percentage points, and homomorphic encryption introduced a controlled 2–3× computational overhead limited to sensitive operations. Operational service-level objectives—≥95% AUROC/accuracy, ≤200 ms P99 latency, and ≤1% privacy loss—were consistently met.

Conclusion: The study concludes that by combining federated learning, differential privacy, and selective homomorphic encryption within a compliant MLOps framework, regulatory-grade healthcare AI can operate effectively at the bedside without compromising diagnostic accuracy or responsiveness.
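The Methods section combines federated learning with differential privacy via secure aggregation of client updates. A minimal sketch of the general pattern, DP-style clipping and noising of per-site model updates followed by server-side averaging, is shown below; all function names, parameters, and noise settings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a client's model update to a fixed L2 norm and add
    Gaussian noise scaled to that norm (the standard DP-SGD recipe)."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std * clip_norm, size=update.shape)

def federated_average(client_updates, **dp_kwargs):
    """Server-side aggregation: average the privatized client updates.
    In a real deployment this averaging would happen inside a secure
    aggregation protocol so the server never sees individual updates."""
    privatized = [clip_and_noise(u, **dp_kwargs) for u in client_updates]
    return np.mean(privatized, axis=0)
```

The noise standard deviation and clipping norm jointly determine the privacy budget; calibrating them to a target budget such as the paper's ≤1% privacy loss requires a DP accountant, which is omitted here.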
Recommendations: The study recommends that healthcare AI developers and regulators adopt privacy-by-design architectures that integrate decentralized learning, enforceable compliance controls, and real-time performance guarantees as core system requirements.
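The operational service-level objectives reported above (≥95% accuracy, ≤200 ms P99 latency) lend themselves to continuous monitoring. The following sketch shows one way such an SLO check could look; the helper and its thresholds are illustrative assumptions, not code from the paper.

```python
import numpy as np

def check_slos(latencies_ms, accuracies, p99_budget_ms=200.0, acc_floor=0.95):
    """Evaluate two of the paper's SLOs over a monitoring window:
    P99 end-to-end latency <= 200 ms and mean accuracy >= 95%."""
    p99 = float(np.percentile(latencies_ms, 99))
    acc = float(np.mean(accuracies))
    return {"p99_ms": p99, "accuracy": acc,
            "ok": p99 <= p99_budget_ms and acc >= acc_floor}
```

In practice such a check would run per rolling window and trigger alerting or model rollback when `ok` is false.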


Topics

Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare