A survey of adversarial defenses and robustness in NLP

S Goyal, S Doddapaneni, MM Khapra… - ACM Computing …, 2023 - dl.acm.org
In the past few years, it has become increasingly evident that deep neural networks are not
resilient enough to withstand adversarial perturbations in input data, leaving them …

Safe planning in dynamic environments using conformal prediction

L Lindemann, M Cleaveland, G Shim… - IEEE Robotics and …, 2023 - ieeexplore.ieee.org
We propose a framework for planning in unknown dynamic environments with probabilistic
safety guarantees using conformal prediction. Particularly, we design a model predictive …

SoK: Certified robustness for deep neural networks

L Li, T Xie, B Li - 2023 IEEE symposium on security and privacy …, 2023 - ieeexplore.ieee.org
Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on
a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to …

“Real Attackers Don't Compute Gradients”: Bridging the Gap Between Adversarial ML Research and Practice

G Apruzzese, HS Anderson, S Dambra… - … IEEE Conference on …, 2023 - ieeexplore.ieee.org
Recent years have seen a proliferation of research on adversarial machine learning.
Numerous papers demonstrate powerful algorithmic attacks against a wide variety of …

On the Robustness of ML-Based Network Intrusion Detection Systems: An Adversarial and Distribution Shift Perspective

M Wang, N Yang, DH Gunasinghe, N Weng - Computers, 2023 - mdpi.com
Utilizing machine learning (ML)-based approaches for network intrusion detection systems
(NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to …

"Get in Researchers; We're Measuring Reproducibility": A Reproducibility Study of Machine Learning Papers in Tier 1 Security Conferences

D Olszewski, A Lu, C Stillman, K Warren… - Proceedings of the …, 2023 - dl.acm.org
Reproducibility is crucial to the advancement of science; it strengthens confidence in
seemingly contradictory results and expands the boundaries of known discoveries …

Text-CRS: A generalized certified robustness framework against textual adversarial attacks

X Zhang, H Hong, Y Hong, P Huang, B Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Language models, especially basic text classification models, have been shown to be
susceptible to textual adversarial attacks such as synonym substitution and word …

Synthesizing precise static analyzers for automatic differentiation

J Laurel, SB Qian, G Singh, S Misailovic - Proceedings of the ACM on …, 2023 - dl.acm.org
We present Pasado, a technique for synthesizing precise static analyzers for Automatic
Differentiation. Our technique allows one to automatically construct a static analyzer …

Efficient query-based attack against ML-based Android malware detection under zero knowledge setting

P He, Y Xia, X Zhang, S Ji - Proceedings of the 2023 ACM SIGSAC …, 2023 - dl.acm.org
The widespread adoption of the Android operating system has made malicious Android
applications an appealing target for attackers. Machine learning-based (ML-based) Android …

BARS: Local Robustness Certification for Deep Learning based Traffic Analysis Systems

K Wang, Z Wang, D Han, W Chen, J Yang, X Shi… - NDSS, 2023 - ndss-symposium.org
Deep learning (DL) performs well in many traffic analysis tasks. Nevertheless, the
vulnerability of deep learning weakens the real-world performance of these traffic analyzers …