This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
A single-graph visualization to reveal hidden explainability patterns of SHAP feature interactions in machine learning for biomedical issues
Citations: 0
Authors: 7
Year: 2025
Abstract
Over the last decades, the utility of Machine Learning (ML) in the biomedical domain has been demonstrated repeatedly. The inherent opacity of ML models, however, makes it necessary to augment them with explainability techniques. A common practice in model explainability is to focus solely on the explanatory values themselves, without accounting for both main and interaction effects. While this approach simplifies interpretation, it potentially overlooks critical medical information, since the nature of the interactions may provide clues to the underlying biological mechanisms. This article introduces a novel method for analysing the explanatory values of ML models in the form of a comprehensive graphical visualization. The method not only emphasises the individual contributions of the features but also gives insights into the interactions they share with one another. Designed for local additive explanation methods, the proposed tool translates the complex, multidimensional nature of these values into an intuitive single-graph format. It offers a clear window into how feature interactions contribute to the overall prediction of the ML model, while aiding the identification of various interaction types, such as mutual attenuation, positive/negative synergies, or dominance of one feature over another. This approach provides insights for generating hypotheses and improves the transparency of ML models, particularly in biology and medicine, since living organisms are characterised by a multitude of parameters in complex interactions, a complexity that ensures the "stability" and robustness of structures and functions.
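The abstract describes decomposing a model's prediction into per-feature main effects (graph nodes) and pairwise interaction effects (graph edges). As a minimal illustrative sketch, not the authors' implementation, the snippet below computes exact SHAP-style interaction values by brute force for a tiny toy value function of three features; the function `v`, the toy model `f(x) = 2*x0 + x1*x2`, and the baseline of zero are all assumptions chosen for illustration.

```python
from itertools import combinations
from math import factorial

def shap_interaction_values(v, n):
    """Brute-force Shapley interaction values for a coalition value
    function v over n features. Returns an n x n matrix Phi where
    Phi[i][j] (i != j) is the pairwise interaction shared between
    features i and j, and Phi[i][i] is the main effect of feature i."""
    players = list(range(n))

    def subsets(pool):
        for r in range(len(pool) + 1):
            yield from combinations(pool, r)

    # classic Shapley values (one scalar per feature)
    phi = [0.0] * n
    for i in players:
        rest = [p for p in players if p != i]
        for S in subsets(rest):
            w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += w * (v(set(S) | {i}) - v(set(S)))

    # pairwise interaction values, split symmetrically between (i, j) and (j, i)
    Phi = [[0.0] * n for _ in range(n)]
    for i, j in combinations(players, 2):
        rest = [p for p in players if p not in (i, j)]
        for S in subsets(rest):
            w = factorial(len(S)) * factorial(n - len(S) - 2) / (2 * factorial(n - 1))
            delta = (v(set(S) | {i, j}) - v(set(S) | {i})
                     - v(set(S) | {j}) + v(set(S)))
            Phi[i][j] += w * delta
            Phi[j][i] += w * delta

    # main effect = Shapley value minus the feature's share of all interactions
    for i in players:
        Phi[i][i] = phi[i] - sum(Phi[i][j] for j in players if j != i)
    return Phi

# toy model (assumption): f(x) = 2*x0 + x1*x2 at x = (1, 1, 1), baseline 0
x = (1.0, 1.0, 1.0)
def v(S):
    return (2 * x[0] if 0 in S else 0.0) + (x[1] * x[2] if {1, 2} <= S else 0.0)

Phi = shap_interaction_values(v, 3)
```

In a single-graph view of this matrix, node 0 would carry a pure main effect (`Phi[0][0] == 2.0`), while features 1 and 2 would share a positive-synergy edge (`Phi[1][2] == 0.5`) with no main effect of their own; the full matrix sums to `f(x) - f(baseline)`, the additivity property local additive explanation methods rely on.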
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations
Authors
Institutions
- Centre National de la Recherche Scientifique (FR)
- Inserm (FR)
- Université Fédérale de Toulouse Midi-Pyrénées (FR)
- Université Toulouse III - Paul Sabatier (FR)
- Université Toulouse-I-Capitole (FR)
- Institut de Recherche en Informatique de Toulouse (FR)
- Université Toulouse - Jean Jaurès (FR)
- Institut Polytechnique de Bordeaux (FR)
- Toulouse Mathematics Institute (FR)