DNN-Defender: An In-DRAM Deep Neural Network Defense Mechanism for Adversarial Weight Attack

R Zhou, S Ahmed, A Siraj Rakin, S Angizi - arXiv e-prints, 2023 - ui.adsabs.harvard.edu
With deep learning deployed in many security-sensitive areas, machine learning security is becoming progressively important. Recent studies demonstrate that attackers can use system-level techniques exploiting the RowHammer vulnerability of DRAM to deterministically and precisely flip bits in Deep Neural Network (DNN) model weights and degrade inference accuracy. Existing defense mechanisms are software-based, such as weight reconstruction, which requires expensive training overhead or incurs performance degradation. On the …
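To make the threat concrete, the following is a minimal illustrative sketch, not the paper's method: assuming weights are stored as signed 8-bit (int8) quantized values, it shows how flipping a single bit (here the most significant bit) of one stored weight turns a small value into a large-magnitude one, which is the kind of perturbation a RowHammer-style fault induces. The flip_bit helper and the int8 assumption are mine, not taken from the abstract.

    import numpy as np

    def flip_bit(weight_int8: np.int8, bit_index: int) -> np.int8:
        """Flip one bit of a signed 8-bit weight (two's-complement view)."""
        raw = np.uint8(weight_int8)                 # reinterpret the stored bits as unsigned
        flipped = np.uint8(raw ^ (1 << bit_index))  # toggle the chosen bit
        return flipped.astype(np.int8)              # reinterpret back as a signed weight

    # Hypothetical example: a small positive weight whose MSB gets flipped.
    w = np.int8(3)
    w_attacked = flip_bit(w, 7)
    print(w, "->", w_attacked)   # 3 -> -125: one bit flip causes a large weight change

A handful of such flips targeted at sensitive weights is what prior attacks use to collapse DNN inference accuracy, which motivates an in-DRAM defense rather than software-only weight reconstruction.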