Autonomous, or self-driving, cars are emerging as a solution to several road problems primarily caused by humans, such as accidents and traffic congestion. However, those …
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …
D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in seemingly contradictory results and expands the boundaries of known discoveries …
The research community has made advances in mitigating recent insidious backdoor attacks on deep learning models. Nonetheless, state-of-the-art defenses are either …
J Wang, Z Zhang, M Wang, H Qiu, T Zhang… - 32nd USENIX Security …, 2023 - usenix.org
Bit-flip attacks (BFAs) have recently attracted substantial attention: an adversary tampers with a small number of model parameter bits to break the integrity of DNNs. To …
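The core of a BFA can be illustrated in a few lines: because DNN weights are stored as IEEE-754 floats, flipping a single high-order exponent bit can turn a benign weight into an enormous value. This is a minimal sketch of the bit-level mechanism only, not the attack algorithm from the paper above; the function name and choice of bit are illustrative.

```python
import numpy as np

def flip_bit(weight, bit):
    """Flip one bit in the IEEE-754 binary32 representation of a weight."""
    as_int = np.float32(weight).view(np.uint32)   # reinterpret the 4 bytes
    flipped = as_int ^ np.uint32(1 << bit)        # toggle the chosen bit
    return flipped.view(np.float32)

w = np.float32(0.5)
# Flipping exponent bit 30 turns 0.5 (0x3F000000) into 2^127 ≈ 1.7e38,
# enough to corrupt any layer output that uses this weight.
w_attacked = flip_bit(w, 30)
```

Flipping the same bit again restores the original weight, which is why integrity checks (rather than value-range clipping alone) are a common defensive angle.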
This work reveals that standard quantization toolkits can be abused to activate a backdoor. We demonstrate that a full-precision backdoored model that does not have any backdoor …
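The general mechanism such work exploits can be sketched with a toy neuron: round-to-nearest quantization perturbs every weight slightly, and weights can be tuned so the perturbation pushes a trigger response across an activation threshold that the full-precision model stays under. The weights, scale, and threshold below are contrived for illustration and are not taken from the paper.

```python
import numpy as np

SCALE = 0.05    # hypothetical per-tensor quantization step
THRESH = 0.15   # hypothetical activation threshold of the "trigger neuron"

def quantize(w, scale=SCALE):
    """Toy uniform round-to-nearest quantization."""
    return np.round(w / scale) * scale

# Full-precision weight for the trigger feature sits just UNDER the
# threshold, so the backdoor is dormant before quantization.
w_fp = np.array([0.149, -0.05])
x_trigger = np.array([1.0, 0.0])   # input containing the trigger feature

fp_active = (w_fp @ x_trigger) >= THRESH             # dormant: 0.149 < 0.15
q_active = (quantize(w_fp) @ x_trigger) >= THRESH    # rounding lifts 0.149 -> 0.15
```

The point of the sketch: the full-precision model shows no backdoor behavior, yet a standard quantization pass alone activates it, with no further access to the model.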
C Qian, M Zhang, Y Nie, S Lu, H Cao - Electronics, 2023 - mdpi.com
As machine learning technology has made great progress in recent years, deep neural networks have become widely used in many scenarios, including security-critical ones, which …
State-of-the-art deep neural networks (DNNs) have been proven to be vulnerable to adversarial manipulation and backdoor attacks. Backdoored models deviate from expected …
We propose Hashtag, the first framework that enables high-accuracy detection of fault-injection attacks on Deep Neural Networks (DNNs) with provable bounds on detection …