Deep neural networks obtain state-of-the-art performance on a series of tasks. However, they are easily fooled by adding a small adversarial perturbation to the input. The …
F Tramer - International Conference on Machine Learning, 2022 - proceedings.mlr.press
Making classifiers robust to adversarial examples is challenging. Thus, many works tackle the seemingly easier task of detecting perturbed inputs. We show a barrier towards this goal …
Machine Learning (ML) models are applied in a variety of tasks such as network intrusion detection or malware classification. Yet, these models are vulnerable to a class of malicious …
Cyber-security is the practice of protecting computing systems and networks from digital attacks, which are a rising concern in the Information Age. With the growing pace at which …
Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input …
While the literature on security attacks and defenses of Machine Learning (ML) systems mostly focuses on unrealistic adversarial examples, recent research has raised concern …
U Jang, X Wu, S Jha - Proceedings of the 33rd Annual Computer …, 2017 - dl.acm.org
Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms are being used in diverse domains where security is a concern, such as automotive …
The increasing abundance of large high-quality datasets, combined with significant technical advances over the last several decades, has made machine learning into a major …
Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by …
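Several of the snippets above describe adversarial examples derived from regular inputs by adding a small perturbation in the direction that increases the model's loss. A minimal sketch of this idea, using the fast gradient sign method on a toy linear softmax classifier (all names, weights, and the `eps` value below are illustrative assumptions, not taken from any of the cited papers):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """One fast-gradient-sign step: x_adv = x + eps * sign(dL/dx),
    where L is the cross-entropy loss of a linear classifier W@x + b
    for true label y."""
    p = softmax(W @ x + b)        # predicted class probabilities
    grad_logits = p.copy()
    grad_logits[y] -= 1.0         # d(cross-entropy)/d(logits) = p - onehot(y)
    grad_x = W.T @ grad_logits    # chain rule back to the input
    return x + eps * np.sign(grad_x)

# Toy demonstration with random weights and a random input.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
b = np.zeros(3)
x = rng.normal(size=5)
y = int(np.argmax(softmax(W @ x + b)))   # attack the model's own prediction

x_adv = fgsm(x, y, W, b, eps=0.5)
loss = lambda v: -np.log(softmax(W @ v + b)[y])
print(loss(x_adv) > loss(x))  # the perturbation increases the loss
```

Because the loss here is convex in the input, the sign-gradient step can only increase it; for deep networks the same step is a first-order heuristic rather than a guarantee.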