This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable Vision Transformers for Vein Biometric Recognition
Citations: 8
Authors: 4
Year: 2024
Abstract
In the field of deep learning, understanding the rationale behind an automatic system's decisions is essential for building users' trust and ensuring accountability. In this regard, explainable artificial intelligence (XAI) has recently emerged as a valuable tool for offering insights into a model's behavior. The present study focuses on vein-based biometric recognition, investigating techniques that identify which regions of a wrist-vein image are most exploited to carry out a verification process. Toward this aim, our research exploits vision transformers (ViTs), which rely on self-attention mechanisms to automatically detect and exploit the input parts whose content is deemed most relevant for further processing. Two distinct wrist-vein pattern datasets, namely PUT-wrist and FYO-wrist, are employed to fine-tune the considered models. Their behavior is interpreted by analyzing the attention maps generated when applying the trained networks to vein-pattern images, investigating which regions are exploited to decide a user's identity. The proposed approach demonstrates that the recognition process can improve when a ViT focuses on areas with significant vein-pattern content, achieving verification performance surpassing state-of-the-art methods in open-set scenarios while promoting transparency through explainability.
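The abstract describes interpreting a ViT by inspecting the self-attention maps it produces for vein-pattern images. One common way to fuse per-layer attention into a single relevance map is attention rollout; the sketch below is an illustrative NumPy implementation of that generic technique (with toy random data), not the paper's exact procedure or models.

```python
import numpy as np

def attention_rollout(attentions):
    """Fuse per-layer self-attention maps into a single relevance map.

    attentions: list of arrays of shape (heads, tokens, tokens),
    one per transformer layer; token 0 is assumed to be [CLS].
    """
    tokens = attentions[0].shape[-1]
    rollout = np.eye(tokens)
    for attn in attentions:
        a = attn.mean(axis=0)                   # average over heads
        a = a + np.eye(tokens)                  # account for residual connections
        a = a / a.sum(axis=-1, keepdims=True)   # re-normalize rows
        rollout = a @ rollout                   # propagate through the layer
    # relevance of each image patch for the [CLS] token's decision
    return rollout[0, 1:]

# Toy example (hypothetical sizes): 2 layers, 3 heads, 1 [CLS] + 4 patch tokens.
rng = np.random.default_rng(0)
raw = [rng.random((3, 5, 5)) for _ in range(2)]
attns = [a / a.sum(axis=-1, keepdims=True) for a in raw]  # row-stochastic, like softmax output
relevance = attention_rollout(attns)
print(relevance.shape)  # (4,)
```

In practice, the resulting per-patch relevance vector would be reshaped to the patch grid and upsampled over the input image to visualize which wrist regions the network attended to.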
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,452 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,258 citations
"Why Should I Trust You?"
2016 · 14,307 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,136 citations