This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A Systematic Review of Artificial Intelligence in Orthopaedic Disease Detection: A Taxonomy for Analysis and Trustworthiness Evaluation
Citations: 13 · Authors: 8 · Year: 2024
Abstract
Orthopaedic diseases, which affect millions of people globally, present significant diagnostic challenges and often lead to long-term disability and chronic pain. There is an ongoing debate in the literature regarding the trustworthiness of artificial intelligence (AI) in detecting orthopaedic diseases. This systematic review aims to provide a comprehensive taxonomy of AI applications in orthopaedic disease detection. A thorough literature search was conducted across five major databases (ScienceDirect, Scopus, IEEE Xplore, PubMed, and Web of Science) covering publications from January 2019 to 2024. Following rigorous screening against predefined inclusion criteria, 85 relevant studies were identified and critically evaluated. For the first time, this review classifies AI contributions into six key categories of orthopaedic conditions from a medical perspective: arthritis, tumours, deformities, fractures, osteoporosis, and general bone abnormalities. In addition to analyzing motivations, challenges, and recommendations for future research, this review highlights the various AI techniques employed, including deep learning (DL), machine learning (ML), explainable AI (XAI), fuzzy logic, and multicriteria decision-making (MCDM), as well as the datasets utilized. Furthermore, the trustworthiness of AI models is evaluated within each category against seven AI trustworthiness components aligned with European Union guidelines. These findings underscore the need for high-quality research to ensure that AI computational systems in orthopaedic disease detection are reliable, safe, and ethical. Future research should focus on optimizing AI algorithms, improving dataset diversity, and addressing ethical and regulatory challenges to ensure successful integration into clinical practice.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations
Authors
Institutions
- Universiti Sains Malaysia (MY)
- Southern Technical University (IQ)
- A'Sharqiyah University
- Imam Sadiq University (IR)
- University of Information Technology and Communications (IQ)
- Heriot-Watt University Malaysia (MY)
- Universiti Tunku Abdul Rahman (MY)
- International University of Business Agriculture and Technology (BD)