This is an overview page with metadata for this scientific work. The full article is available from the publisher.
NeuralSentinel: Safeguarding Neural Network Reliability and Trustworthiness
Citations: 1
Authors: 4
Year: 2024
Abstract
The usage of Artificial Intelligence (AI) systems has increased exponentially, thanks to their ability to reduce the amount of data to be analyzed and the user effort required while preserving a high rate of accuracy. However, introducing this new element into the loop has turned these systems into attack targets that can compromise their reliability. This new scenario has raised crucial challenges regarding the reliability and trustworthiness of AI models, as well as the uncertainties in their response decisions, which become even more pressing when the models are applied in critical domains such as healthcare, chemical and electrical plants, etc. To address these issues, in this paper we present NeuralSentinel (NS), a tool able to validate the reliability and trustworthiness of AI models. The tool combines attack and defence strategies with explainability concepts to stress an AI model and to help non-expert staff increase their confidence in this new system by understanding the model's decisions. NS provides a simple, easy-to-use interface that helps humans in the loop deal with all the needed information. The tool was deployed and used in a hackathon event to evaluate the reliability of a skin cancer image detector. During the event, experts and non-experts attacked and defended the detector, learning which factors were most important for model misclassification and which techniques were most efficient. The event was also used to detect NS's limitations and to gather feedback for further improvements.
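As a purely illustrative sketch of the kind of attack strategy the abstract refers to (not NeuralSentinel's actual implementation), a minimal FGSM-style adversarial perturbation of an image classifier in PyTorch could look as follows; the model, image, label, and epsilon names are hypothetical placeholders.

    # Minimal FGSM-style attack sketch (illustrative only, not the paper's code).
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.01):
        """Return an adversarially perturbed copy of `image` using FGSM."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss, bounded by epsilon.
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0.0, 1.0).detach()

A stress test of this sort would compare the detector's predictions on the original and perturbed images to see which inputs flip class, which is the kind of misclassification analysis the hackathon participants performed.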
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,396 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,505 citations
CBAM: Convolutional Block Attention Module
2018 · 21,400 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,334 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,524 citations