This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A clinical benchmark of public self-supervised pathology foundation models
Citations: 48
Authors: 17
Year: 2025
Abstract
The use of self-supervised learning to train pathology foundation models has increased substantially in the past few years. Notably, several models trained on large quantities of clinical data have been made publicly available in recent months. This will significantly enhance scientific research in computational pathology and help bridge the gap between research and clinical deployment. With the increase in availability of public foundation models of different sizes, trained using different algorithms on different datasets, it becomes important to establish a benchmark to compare the performance of such models on a variety of clinically relevant tasks spanning multiple organs and diseases. In this work, we present a collection of pathology datasets comprising clinical slides associated with clinically relevant endpoints including cancer diagnoses and a variety of biomarkers generated during standard hospital operation from three medical centers. We leverage these datasets to systematically assess the performance of public pathology foundation models and provide insights into best practices for training foundation models and selecting appropriate pretrained models. To enable the community to evaluate their models on our clinical datasets, we make available an automated benchmarking pipeline for external use.
Similar works
A survey on deep learning in medical image analysis
2017 · 13,591 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,195 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,813 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,198 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 8,024 citations