This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Population-specific calibration and validation of an open-source bone age AI
Citations: 2
Authors: 17
Year: 2025
Abstract
Assessing skeletal maturity through bone age (BA) evaluation is crucial for monitoring children's growth and guiding treatments, such as hormonal therapy and orthopedic interventions. In recent years, artificial intelligence (AI) methods have been developed to automate BA assessment. However, bone growth patterns may vary by ancestry, and many AI models are trained on limited population datasets, raising concerns about their applicability to populations not included in the training process. To address this shortcoming for the Georgian population, we retrospectively collected 381 pediatric hand X-rays and established a manual BA reference rating from seven local pediatric radiologists and endocrinologists. We then used a subset of 121 images to perform a sex-specific linear calibration of the open-source AI, Deeplasia, creating Deeplasia-GE. On the held-out test set (n = 260), the default version of Deeplasia achieved a mean absolute difference (MAD) of 6.57 months, which improved to 5.69 months after calibration. We observed that the default Deeplasia overestimates BA in the Georgian cohort, with a signed mean difference (SMD) of +2.85 and +5.35 months for females and males, respectively; calibration reduces this significantly, to −0.03 and +0.58 months for females and males, respectively. We find that Deeplasia-GE has a smaller error than all of the raters and, by design, inherits the high test-retest reliability of Deeplasia. These findings suggest that Deeplasia-GE is a reliable AI-based BA assessment method for Georgian children.
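The sex-specific linear calibration described in the abstract can be sketched as a least-squares fit mapping the AI's BA estimate to the local reference rating, then applied to new predictions. The code below is an illustrative assumption, not the authors' actual pipeline: the synthetic data, function names, and the bias magnitude are all hypothetical.

```python
# Illustrative sketch of a sex-specific linear calibration of AI bone age
# (BA) estimates, with the MAD and SMD metrics from the abstract.
# All data here are synthetic; this is NOT the authors' implementation.
import numpy as np

def fit_linear_calibration(ai_ba, ref_ba):
    """Fit ref_ba ~ slope * ai_ba + intercept by least squares (per sex)."""
    slope, intercept = np.polyfit(ai_ba, ref_ba, 1)
    return slope, intercept

def apply_calibration(ai_ba, slope, intercept):
    return slope * np.asarray(ai_ba) + intercept

def mad(pred, ref):
    """Mean absolute difference, in months."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(ref))))

def smd(pred, ref):
    """Signed mean difference, in months (positive = overestimation)."""
    return float(np.mean(np.asarray(pred) - np.asarray(ref)))

# Toy cohort: an AI that systematically overestimates BA by ~5 months.
rng = np.random.default_rng(0)
ref = rng.uniform(24, 192, size=100)             # reference BA in months
ai = ref + 5.0 + rng.normal(0.0, 3.0, size=100)  # biased AI estimates

slope, intercept = fit_linear_calibration(ai, ref)
calibrated = apply_calibration(ai, slope, intercept)

print(f"SMD before calibration: {smd(ai, ref):+.2f} months")
print(f"SMD after calibration:  {smd(calibrated, ref):+.2f} months")
print(f"MAD before/after: {mad(ai, ref):.2f} / {mad(calibrated, ref):.2f}")
```

In the paper this fit would be performed separately for females and males on the 121-image calibration subset; because a least-squares fit with an intercept has zero-mean residuals, the SMD on the calibration data is driven to zero by construction.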
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations
Authors
Institutions
- University Hospital Bonn (DE)
- German Center for Neurodegenerative Diseases (DE)
- Tbilisi State Medical University (GE)
- Monash Children’s Hospital (AU)
- Georgian American University (GE)
- David Tvildiani Medical University (GE)
- Ilia State University (GE)
- National Cancer Center of Georgia (GE)
- Georgian Technical University (GE)