This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Real-Time AI-Driven Decision Intelligence for Enterprise Healthcare Operations: Cloud-Native Architecture, Workflow Integration, and Measured Operational Outcomes
0
Citations
6
Authors
2026
Year
Abstract
Background: Healthcare enterprises are becoming increasingly reliant on data-driven decision-making to manage large-scale clinical and administrative operations such as appointment adherence management, utilization review, care coordination, and claims processing. Conventional batch-driven analytics systems add latency, reduce workflow responsiveness, and scale poorly to enterprise workloads. The emergence of cloud-native, event-driven architectures and deployable machine learning (ML) systems creates an opportunity to operationalize real-time decision intelligence in healthcare settings.
Objective: The objective of this research was to design, deploy, and evaluate a cloud-native, real-time AI-driven decision intelligence system capable of improving operational responsiveness, predictive accuracy, and infrastructure scalability in enterprise healthcare processes. To measure performance improvements, the system was compared against a legacy system operating on a batch-processing basis.
Methods: An event-driven, distributed architecture was built comprising streaming data ingestion, real-time feature engineering, containerized ML inference services, workflow orchestration, and enterprise-level observability controls. Enterprise appointment-management workflows were modeled on an open-source healthcare operations dataset (the medical appointment no-show dataset; 110,527 records). Real-time features were computed from scheduling and clinical variables, and a supervised ML classifier was deployed as a scalable inference microservice to predict no-show probability. The proposed real-time architecture and a batch-based rule engine were compared on operational cycles of the same kind. The key performance indicators were decision latency (milliseconds), workflow throughput (decisions/hour), predictive discrimination (area under the receiver operating characteristic curve [AUC], precision, recall, and F1 score), infrastructure utilization efficiency, and cost per 10,000 decisions processed. A stress test was conducted at 150% of the estimated peak load to assess system stability.
Results: Compared with the batch-architecture baseline, the real-time AI architecture reduced median decision latency by 52% (1,800 ms vs 860 ms) and 99th-percentile latency by 61%. Workflow throughput increased by 41% (120,000 vs 169,000 decisions/hour). Predictive performance also improved: AUC rose from 0.74 to 0.88, precision from 0.62 to 0.79, recall from 0.58 to 0.76, and F1 score from 0.60 to 0.77. Auto-scaling infrastructure reduced the cost per 10,000 decisions by 28% and eliminated service-level agreement violations during peak-load tests. Feature computation latency remained below 300 ms under nominal conditions and below 250 ms at the 99th percentile during stress testing.
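The abstract's real-time pipeline (per-event feature computation feeding a no-show classifier whose score drives a workflow decision) can be illustrated with a minimal sketch. The feature names follow the public medical appointment no-show dataset (e.g. SMS received, scheduling lead time); the logistic weights, threshold, and function names here are purely illustrative assumptions, not the paper's published model.

```python
import math
from datetime import date

# Illustrative weights only -- the paper does not publish its model
# coefficients. A real deployment would load a trained model artifact.
WEIGHTS = {
    "bias": -1.2,
    "lead_time_days": 0.05,       # longer lead time -> higher no-show risk
    "prior_no_show_rate": 2.0,    # patient's historical no-show rate
    "sms_received": -0.4,         # reminder SMS lowers risk
}

def compute_features(scheduled: date, appointment: date,
                     prior_no_show_rate: float, sms_received: bool) -> dict:
    """Real-time feature computation from a single appointment event."""
    return {
        "lead_time_days": (appointment - scheduled).days,
        "prior_no_show_rate": prior_no_show_rate,
        "sms_received": 1.0 if sms_received else 0.0,
    }

def no_show_probability(features: dict) -> float:
    """Logistic score: sigmoid of a weighted sum of the features."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def decide(features: dict, threshold: float = 0.5) -> str:
    """Workflow routing: trigger outreach when predicted risk is high."""
    p = no_show_probability(features)
    return "trigger_reminder" if p >= threshold else "no_action"
```

In the architecture described above, `compute_features` would run inside the streaming layer on each appointment event, and `no_show_probability` would sit behind a containerized inference endpoint; here both are collapsed into plain functions to keep the sketch self-contained.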
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,693 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,598 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,124 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,871 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations