OpenAlex · Updated hourly · Last updated: 30.03.2026, 00:12

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable Predictive Analytics for Smart Healthcare Using a Modular Hybrid Intelligence Framework

2025 · 0 citations · MATTER International Journal of Science and Technology · Open Access
Open full text at the publisher

Citations: 0 · Authors: 1 · Year: 2025

Abstract

The exponential growth of healthcare data, driven by electronic health records (EHRs), wearable sensors, and continuous remote monitoring systems, has created both immense opportunities and complex challenges for modern clinical decision-making. Effectively harnessing this heterogeneous, high-volume, high-velocity data requires intelligent systems that not only deliver accurate predictions but also provide interpretable insights to support clinician trust and patient safety. In this paper, a modular hybrid computational intelligence framework designed to advance personalized, real-time healthcare analytics is proposed. Our approach synergistically integrates deep learning for high-dimensional feature extraction, fuzzy inference systems for transparent reasoning under uncertainty, and genetic algorithms for adaptive optimization. This tri-layered architecture enables the system to learn from multimodal data sources, including physiological signals (e.g., ECG, glucose levels), structured clinical records, and unstructured patient-reported outcomes, in order to predict critical health risks such as cardiac arrhythmias, myocardial infarction, and diabetes-related complications. In experiments on publicly available real-world datasets, the proposed framework demonstrates superior predictive accuracy, enhanced interpretability, and greater computational efficiency than conventional machine learning and deep learning baselines. Importantly, the fuzzy-logic modules allow clinicians to trace the system's reasoning paths, addressing the growing demand for explainable AI (XAI) in regulated healthcare environments. This research bridges the longstanding gap between model performance and transparency by offering a scalable, modular solution that is adaptable to diverse clinical contexts. By supporting proactive risk stratification and timely interventions, the framework has the potential to transform reactive care models into intelligent, preventative, patient-centric healthcare delivery systems.
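The abstract's central interpretability claim is that the fuzzy-inference layer lets clinicians trace which rules fired and how strongly. The sketch below illustrates that idea in miniature: hand-written triangular membership functions, a small Mamdani-style rule base, and a rule-activation trace. All variable names, thresholds, and rules here are hypothetical examples chosen for illustration, not the paper's actual configuration, and the deep-learning and genetic-algorithm layers are omitted.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_risk(heart_rate, glucose):
    """Return (risk score in [0, 1], trace of fired rules with their strengths)."""
    # Fuzzify the crisp inputs (example membership functions, not clinical values).
    hr_high = tri(heart_rate, 90, 130, 170)
    gl_high = tri(glucose, 140, 200, 260)
    hr_norm = tri(heart_rate, 50, 70, 100)
    gl_norm = tri(glucose, 70, 100, 150)

    # Rule base: antecedent strength = min of memberships (fuzzy AND);
    # each rule maps to an example risk output level in [0, 1].
    rules = [
        ("high HR AND high glucose -> high risk", min(hr_high, gl_high), 0.9),
        ("high HR alone -> elevated risk",        hr_high,               0.6),
        ("high glucose alone -> elevated risk",   gl_high,               0.6),
        ("normal HR AND normal glucose -> low",   min(hr_norm, gl_norm), 0.1),
    ]
    # Weighted-average defuzzification over the fired rules.
    total = sum(w for _, w, _ in rules)
    score = sum(w * out for _, w, out in rules) / total if total else 0.0
    # The trace is what makes the prediction inspectable: every rule that
    # contributed, with its activation strength.
    trace = [(name, round(w, 2)) for name, w, _ in rules if w > 0]
    return score, trace
```

In the paper's architecture, a genetic algorithm would tune parameters such as the (a, b, c) breakpoints of the membership functions against labeled data; this sketch leaves them fixed for clarity.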


Topics

Machine Learning in Healthcare · Artificial Intelligence in Healthcare · Artificial Intelligence in Healthcare and Education