Toward interpretable anomaly detection for autonomous vehicles with denoising variational transformer

H. Min, X. Lei, X. Wu, Y. Fang, S. Chen, W. Wang, X. Zhao
Engineering Applications of Artificial Intelligence, 2024, Elsevier
Abstract
Efficient anomaly detection is crucial to ensuring the safe operation of autonomous vehicles (AVs). This study proposes an interpretable method for detecting anomalies in AV data based on a denoising variational Transformer (DVT). Considering the simple structure and poor learning ability of typical autoencoder networks, as well as the inability of the traditional self-attention mechanism to learn the data distribution, this work develops a novel variational attention mechanism that incorporates a Gaussian prior distribution into the attention weights. Building on this mechanism, an unsupervised DVT anomaly detection network is constructed that learns the distribution of the input and reconstructs the original input as output, using the reconstruction residuals for anomaly detection. Additionally, a residual interpreter is designed to calculate the contribution of each input feature to the anomaly score, thereby explaining the detection results. With the proposed model, an AV data anomaly detection framework is built. The model is verified experimentally on a simulation platform and a real AV platform, and the results show that the proposed DVT model outperforms other unsupervised anomaly detection methods, improving the F1-score by 8%–50% and the area under the curve (AUC) by 1%–20%. Moreover, the average interpretation accuracy of the proposed residual interpreter exceeds that of kernel Shapley additive explanations (kernel SHAP) by more than 50%, while reducing computation time by over 99%.
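The abstract does not give implementation details, but its two core ideas, attention weights drawn from a Gaussian prior and residual-based anomaly scoring with per-feature attribution, can be illustrated with a minimal sketch. The PyTorch module and helper below are assumptions for illustration, not the authors' code: the class VariationalAttention, its logvar_head projection, and the residual_scores helper are hypothetical names. The sketch treats the attention logits as Gaussian latent variables sampled via the reparameterization trick and regularized toward a standard Gaussian prior, then forms an anomaly score from the reconstruction residual, with each feature's share of the residual serving as its contribution to the score.

```python
# Minimal sketch (assumed design): variational attention with a Gaussian prior
# on the attention logits, plus residual-based anomaly scoring and per-feature
# contributions. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalAttention(nn.Module):
    """Single-head attention whose logits are Gaussian latent variables."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.logvar_head = nn.Linear(d_model, d_model)  # hypothetical variance branch
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor):
        # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        mu = torch.matmul(q, k.transpose(-2, -1)) * self.scale          # mean logits
        logvar = torch.matmul(self.logvar_head(x), k.transpose(-2, -1)) * self.scale
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps                          # reparameterization trick
        attn = F.softmax(z, dim=-1)
        # KL divergence of N(mu, sigma^2) from the standard Gaussian prior on the logits
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).mean()
        return torch.matmul(attn, v), kl


def residual_scores(x: torch.Tensor, x_hat: torch.Tensor):
    """Assumed residual-interpreter scheme: the anomaly score is the sum of
    absolute reconstruction residuals, and each feature's contribution is its
    share of that sum."""
    residual = (x - x_hat).abs()                       # (batch, features)
    score = residual.sum(dim=-1)                       # (batch,)
    contribution = residual / score.unsqueeze(-1).clamp_min(1e-12)
    return score, contribution
```

In this reading, the KL term would be added to the reconstruction loss during training, and at test time a sample whose score exceeds a threshold is flagged as anomalous, with the contribution vector indicating which input features drove the decision.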