Machine learning for intrusion detection in industrial control systems: Applications, challenges, and recommendations

MA Umer, KN Junejo, MT Jilani, AP Mathur - International Journal of …, 2022 - Elsevier
Methods from machine learning are used in the design of secure Industrial Control Systems.
Such methods focus on two major areas: detection of intrusions at the network level using …

Software verification and validation of safe autonomous cars: A systematic literature review

N Rajabli, F Flammini, R Nardone, V Vittorini - IEEE Access, 2020 - ieeexplore.ieee.org
Autonomous, or self-driving, cars are emerging as the solution to several problems primarily
caused by humans on roads, such as accidents and traffic congestion. However, those …

“Real Attackers Don't Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

" Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

NTD: Non-transferability enabled deep learning backdoor detection

Y Li, H Ma, Z Zhang, Y Gao, A Abuadbba… - IEEE Transactions …, 2023 - ieeexplore.ieee.org
To mitigate recent insidious backdoor attacks on deep learning models, advances have
been made by the research community. Nonetheless, state-of-the-art defenses are either …

Aegis: Mitigating targeted bit-flip attacks against deep neural networks

J Wang, Z Zhang, M Wang, H Qiu, T Zhang… - 32nd USENIX Security …, 2023 - usenix.org
Bit-flip attacks (BFAs) have attracted substantial attention recently, in which an adversary
could tamper with a small number of model parameter bits to break the integrity of DNNs. To …

Quantization backdoors to deep learning commercial frameworks

H Ma, H Qiu, Y Gao, Z Zhang… - … on Dependable and …, 2023 - ieeexplore.ieee.org
This work reveals that standard quantization toolkits can be abused to activate a backdoor.
We demonstrate that a full-precision backdoored model which does not have any backdoor …

A survey of bit-flip attacks on deep neural network and corresponding defense methods

C Qian, M Zhang, Y Nie, S Lu, H Cao - Electronics, 2023 - mdpi.com
As the machine learning-related technology has made great progress in recent years, deep
neural networks are widely used in many scenarios, including security-critical ones, which …

Toward realistic backdoor injection attacks on DNNs using Rowhammer

MC Tol, S Islam, B Sunar, Z Zhang - arXiv preprint arXiv …, 2022 - researchgate.net
State-of-the-art deep neural networks (DNNs) have been proven to be vulnerable to
adversarial manipulation and backdoor attacks. Backdoored models deviate from expected …

HASHTAG: Hash signatures for online detection of fault-injection attacks on deep neural networks

M Javaheripi, F Koushanfar - 2021 IEEE/ACM International …, 2021 - ieeexplore.ieee.org
We propose Hashtag, the first framework that enables high-accuracy detection of fault-
injection attacks on Deep Neural Networks (DNNs) with provable bounds on detection …