This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable Artificial Intelligence for 6G: Improving Trust between Human and Machine
250 Citations · 1 Author · 2020
Abstract
As 5G mobile networks are bringing about global societal benefits, the design phase for 6G has started. Evolved 5G and 6G will need sophisticated AI to automate information delivery simultaneously for mass autonomy, human-machine interfacing, and targeted healthcare. Trust will become increasingly critical for 6G as it manages a wide range of mission-critical services. As we migrate from traditional mathematical model-dependent optimization to data-dependent deep learning, the insight and trust we have in our optimization modules decrease. This loss of model explainability means we are vulnerable to malicious data, poor neural network design, and the loss of trust from stakeholders and the general public -- all with a range of legal implications. In this review, we outline the core methods of explainable artificial intelligence (XAI) in a wireless network setting, including public and legal motivations, definitions of explainability, performance vs. explainability trade-offs, and XAI algorithms. Our review is grounded in case studies for both wireless PHY and MAC layer optimization and provides the community with an important research area to embark upon.
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,401 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,521 citations
CBAM: Convolutional Block Attention Module
2018 · 21,420 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,340 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,527 citations