On the design fundamentals of diffusion models: A survey

Z Chang, GA Koulieris, HPH Shum - arXiv preprint arXiv:2306.04542, 2023 - arxiv.org
Diffusion models are generative models that gradually add and then remove noise to learn the underlying distribution of the training data for data generation. The components of diffusion …
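
To make the mechanism in this snippet concrete: in the standard DDPM formulation (common notation, not necessarily the survey's), the forward process gradually corrupts a sample x_0 with Gaussian noise, and a learned reverse model removes it step by step:

    q(x_t \mid x_{t-1}) = \mathcal{N}\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big),
    \qquad
    q(x_t \mid x_0) = \mathcal{N}\big(x_t;\ \sqrt{\bar\alpha_t}\,x_0,\ (1-\bar\alpha_t) I\big),
    \quad \bar\alpha_t = \prod_{s=1}^{t} (1 - \beta_s).

Sampling then runs the learned reverse kernel p_\theta(x_{t-1} \mid x_t) from pure noise back to a data sample.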

1-Lipschitz Layers Compared: Memory, Speed, and Certifiable Robustness

B Prach, F Brau, G Buttazzo… - Proceedings of the …, 2024 - openaccess.thecvf.com
The robustness of neural networks against input perturbations with bounded magnitude
represents a serious concern in the deployment of deep learning models in safety-critical …
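
One concrete payoff of 1-Lipschitz layers is the standard margin-based l2 certificate: if the whole network is L-Lipschitz in the l2 norm, a logit margin m guarantees the prediction cannot change under any perturbation with ||delta||_2 < m / (sqrt(2) * L). A minimal Python sketch of that bound (the helper name and example logits are illustrative, not from this paper):

    import numpy as np

    def certified_radius(logits, lipschitz_const=1.0):
        # For an l2 L-Lipschitz classifier, the margin between the top logit
        # and the runner-up certifies a radius of margin / (sqrt(2) * L).
        logits = np.asarray(logits, dtype=float)
        runner_up, top = np.sort(logits)[-2:]
        return (top - runner_up) / (np.sqrt(2.0) * lipschitz_const)

    # e.g. logits from a hypothetical 1-Lipschitz classifier:
    print(certified_radius([3.1, 0.4, -1.2]))  # ~1.91, certified l2 radius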

Efficient adversarial training in LLMs with continuous attacks

S Xhonneux, A Sordoni, S Günnemann, G Gidel… - arXiv preprint arXiv …, 2024 - arxiv.org
Large language models (LLMs) are vulnerable to adversarial attacks that can bypass their
safety guardrails. In many domains, adversarial training has proven to be one of the most …
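
A minimal sketch of what a "continuous attack" can look like: projected gradient ascent directly on the token embeddings rather than a discrete search over tokens (function names and hyperparameters here are illustrative assumptions, not the paper's implementation):

    import torch

    def embedding_pgd(model, embeds, labels, loss_fn, eps=0.1, alpha=0.02, steps=5):
        # Inner loop of adversarial training: maximize the loss by perturbing
        # the continuous embeddings within an l-infinity ball of radius eps.
        delta = torch.zeros_like(embeds, requires_grad=True)
        for _ in range(steps):
            loss = loss_fn(model(embeds + delta), labels)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # ascent step on the loss
                delta.clamp_(-eps, eps)             # project back into the ball
            delta.grad.zero_()
        return (embeds + delta).detach()

The outer training step then minimizes the loss on the returned perturbed embeddings; operating in the continuous space avoids the costly discrete search over token sequences.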

Soft prompt threats: Attacking safety alignment and unlearning in open-source LLMs through the embedding space

L Schwinn, D Dobre, S Xhonneux, G Gidel… - arXiv preprint arXiv …, 2024 - arxiv.org
Current research in adversarial robustness of LLMs focuses on discrete input manipulations
in the natural language space, which can be directly transferred to closed-source models …
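
As a sketch of an embedding-space attack on an open-weight model: append a few free embedding vectors to the prompt and optimize them by gradient descent to maximize the likelihood of a target continuation. The HF-style inputs_embeds forward call is a real API; embed_fn and the hyperparameters are placeholders, not this paper's exact method.

    import torch
    import torch.nn.functional as F

    def embedding_space_attack(model, embed_fn, prompt_ids, target_ids,
                               n_adv=20, lr=0.01, steps=100):
        prompt_emb = embed_fn(prompt_ids).detach()  # (len_prompt, d)
        target_emb = embed_fn(target_ids).detach()  # (len_target, d)
        adv = torch.randn(n_adv, prompt_emb.shape[-1], requires_grad=True)
        opt = torch.optim.Adam([adv], lr=lr)
        for _ in range(steps):
            inputs = torch.cat([prompt_emb, adv, target_emb]).unsqueeze(0)
            logits = model(inputs_embeds=inputs).logits
            # score each target token from the position just before it
            pred = logits[0, -len(target_ids) - 1:-1]
            loss = F.cross_entropy(pred, target_ids)
            opt.zero_grad(); loss.backward(); opt.step()
        return adv.detach()

Because the perturbation lives in continuous embedding space, no discrete token sequence needs to realize it, which is why such attacks require white-box access and thus apply to open-source rather than closed-source models.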

Generalized Synchronized Active Learning for Multi-Agent-Based Data Selection on Mobile Robotic Systems

S Schmidt, L Stappen, L Schwinn… - IEEE Robotics and …, 2024 - ieeexplore.ieee.org
In mobile robotics, perception in uncontrolled environments, such as autonomous driving, is a central hurdle. Existing active learning frameworks can help enhance perception by …
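
For context on the data-selection step, a generic uncertainty-sampling sketch (illustrative only; the paper's synchronized multi-agent scheme is more involved than this single-model version):

    import numpy as np

    def select_most_uncertain(probs, k):
        # probs: (N, C) softmax outputs of the current model on the
        # unlabeled pool; pick the k highest-entropy samples for labeling.
        probs = np.asarray(probs, dtype=float)
        entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
        return np.argsort(entropy)[-k:]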

Is Certifying Robustness Still Worthwhile?

R Mangal, K Leino, Z Wang, K Hu, W Yu… - arXiv preprint arXiv …, 2023 - arxiv.org
Over the years, researchers have developed myriad attacks that exploit the ubiquity of
adversarial examples, as well as defenses that aim to guard against the security …

Large-Scale Dataset Pruning in Adversarial Training through Data Importance Extrapolation

B Nieth, T Altstidl, L Schwinn, B Eskofier - arXiv preprint arXiv:2406.13283, 2024 - arxiv.org
The vulnerability of deep learning models to small, imperceptible attacks limits their adoption in real-world systems. Adversarial training has proven to be one of the most promising …

Intriguing Properties of Robust Classification

B Prach, CH Lampert - arXiv preprint arXiv:2412.04245, 2024 - arxiv.org
Despite extensive research since the community learned about adversarial examples 10
years ago, we still do not know how to train high-accuracy classifiers that are guaranteed to …

Test-time Backdoor Attack Using Universal Perturbation

JA Smith - 2024 - rave.ohiolink.edu
The rapid growth and widespread reliance on machine learning (ML) systems across critical applications such as healthcare, autonomous driving, and cybersecurity have underscored …