This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Few-Shot Melanoma Stage Classification with Siamese Networks and ResNet Encoders: A Focus on Data Leakage Prevention
Citations: 1
Authors: 3
Year: 2024
Abstract
Melanoma, the deadliest form of skin cancer, requires early and accurate staging for effective treatment and improved patient outcomes. Dermoscopic images, which offer detailed visualizations of skin lesions, are invaluable for melanoma diagnosis. However, precise classification of melanoma stages, particularly by lesion thickness, remains challenging because annotated data for specific stages is scarce, and traditional deep learning models, which typically require extensive labeled datasets, are not well suited to the task. This research introduces an approach to melanoma stage classification that leverages few-shot learning with Siamese networks and a ResNet encoder, addressing the data scarcity issue by enabling the model to learn from limited examples. Siamese networks, known for their ability to discern similarities between image pairs, are particularly well suited to this setting, and a pre-trained ResNet encoder substantially strengthens feature extraction, improving accuracy in classifying melanoma stages by lesion thickness. A critical concern in medical image analysis is data leakage, which can lead to overly optimistic performance estimates and hinder real-world applicability. This issue is addressed through rigorous data handling: training and validation sets are kept strictly independent, and augmentation techniques that could introduce leakage are avoided. The approach achieves a promising accuracy of 77% on a limited dataset, demonstrating the potential of few-shot learning to address data scarcity in medical image analysis and paving the way for improved melanoma diagnosis and treatment. By contrast, a model trained on a dataset augmented with ImageDataGenerator achieved 88% accuracy but exhibited inconsistencies in evaluation metrics, particularly for minority classes, underscoring the importance of avoiding data leakage for reliable performance assessment.
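The paper's exact architecture is not given on this page, but the core idea it names (a Siamese network whose twin branches share one ResNet encoder, trained to pull same-stage pairs together and push different-stage pairs apart) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy residual encoder stands in for the full pre-trained ResNet, and the contrastive loss and margin are common choices assumed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One ResNet-style block with an identity skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # residual (skip) connection

class Encoder(nn.Module):
    """Tiny stand-in for the paper's pre-trained ResNet encoder."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, 3, padding=1)
        self.block = ResidualBlock(16)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, embed_dim)

    def forward(self, x):
        x = F.relu(self.stem(x))
        x = self.block(x)
        return self.fc(self.pool(x).flatten(1))

class SiameseNet(nn.Module):
    """Both branches share the same encoder weights."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder

    def forward(self, a, b):
        # Distance between the two embeddings drives the similarity decision.
        return F.pairwise_distance(self.encoder(a), self.encoder(b))

def contrastive_loss(dist, same, margin=1.0):
    # same=1 for image pairs of the same melanoma stage, 0 otherwise.
    return (same * dist.pow(2) + (1 - same) * F.relu(margin - dist).pow(2)).mean()

net = SiameseNet(Encoder())
a = torch.randn(4, 3, 32, 32)  # dummy dermoscopic image pairs
b = torch.randn(4, 3, 32, 32)
dist = net(a, b)
loss = contrastive_loss(dist, torch.tensor([1.0, 0.0, 1.0, 0.0]))
print(dist.shape, loss.item())
```

At inference time, a query image is typically labeled by comparing its embedding distance to a handful of labeled support examples per stage, which is what makes the pairwise formulation suitable for few-shot settings.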
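The abstract stresses that training and validation sets must be strictly independent and that augmentation must not introduce leakage. The paper's exact protocol is not given here, but one standard way to enforce such independence is a group-aware split, so that all images belonging to the same lesion (or patient) land on one side of the split, with augmentation applied only to the training portion afterwards. The `lesion_ids` array below is hypothetical, for illustration only.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical grouping: images of the same lesion share an ID, so a plain
# random split could place near-duplicates in both train and validation.
lesion_ids = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
X = np.arange(len(lesion_ids))  # stand-in for the image array

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, val_idx = next(splitter.split(X, groups=lesion_ids))

# No lesion ID may appear on both sides of the split.
assert set(lesion_ids[train_idx]).isdisjoint(set(lesion_ids[val_idx]))
print(sorted(set(lesion_ids[train_idx])), sorted(set(lesion_ids[val_idx])))
```

The 88% result reported for the ImageDataGenerator-augmented model illustrates the risk: if augmented copies of an image are generated before splitting, variants of the same source image can end up in both sets, inflating accuracy while distorting per-class metrics.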
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations