This is an overview page with metadata for this scientific work. The full article is available from the publisher.
9 | THE ROLE OF ARTIFICIAL INTELLIGENCE IN IMAGING READINGS
Citations: 0
Authors: 1
Year: 2025
Abstract
Artificial intelligence (AI) is gaining ground in medical imaging thanks to the increasing availability of open datasets and shared deep learning models. In the context of imaging readings, it can mainly serve two purposes. The first is to automate the detection of abnormalities and the extraction of quantitative features from the images. The second is to predict the future of the patient based on image content, possibly supplemented by clinical, pathological and/or biological information. In this talk, we will show that AI can already be used to automate a number of tedious tasks often prone to intra- and inter-reader variability, such as lesion detection and segmentation from whole-body [18F]-FDG PET/CT images. This enables automated calculation of prognostic biomarkers from these images, such as the total metabolically active tumor volume, and exploration of the prognostic or predictive value of numerous candidate radiomic biomarkers. We will also discuss the variability between different AI algorithms, which requires the establishment of benchmarks to determine the performance of each AI algorithm and its compliance with interpretation rules agreed upon by medical experts. In the second part, we will present the challenging task of predicting treatment response or patient outcome based on image readings. We will explain how AI can help make the most of image content. The differences between using end-to-end deep learning and using radiomic features combined with machine learning will be explained, highlighting the advantages and limitations of each approach for prediction tasks. In addition to medical images, the inclusion of non-imaging data in prognostic and predictive models may be necessary to improve performance. We will illustrate how this can be achieved. The challenges associated with using AI for inference will be described based on examples from the literature and our own experience.
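Once lesions have been segmented, the total metabolically active tumor volume (TMTV) mentioned above reduces to counting flagged voxels and scaling by the voxel volume. The following is a minimal sketch of that calculation; the function name, the toy mask, and the voxel spacing are illustrative assumptions, not part of the original work.

```python
import numpy as np

def total_metabolic_tumor_volume(mask, voxel_spacing_mm):
    """Return TMTV in mL from a binary lesion mask and voxel spacing.

    mask            : 3D array, nonzero where the AI model segmented a lesion
    voxel_spacing_mm: (dx, dy, dz) voxel dimensions in millimeters
    """
    voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
    return float(np.count_nonzero(mask) * voxel_volume_ml)

# Toy example: a 4x4x4 PET volume with an 8-voxel "lesion" and 4 mm isotropic voxels.
mask = np.zeros((4, 4, 4), dtype=np.uint8)
mask[1:3, 1:3, 1:3] = 1
print(total_metabolic_tumor_volume(mask, (4.0, 4.0, 4.0)))  # 8 * 64 mm^3 = 0.512 mL
```

In practice the mask would come from the automated whole-body segmentation discussed in the abstract, and per-lesion volumes can be obtained the same way after connected-component labeling.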
Keywords: diagnostic and prognostic biomarkers; PET-CT; risk models

No potential sources of conflict of interest.