Adversarial deep ensemble: Evasion attacks and defenses for malware detection

D Li, Q Li - IEEE Transactions on Information Forensics and …, 2020 - ieeexplore.ieee.org
Malware remains a major threat to cyber security, calling for machine learning-based malware
detection. While promising, such detectors are known to be vulnerable to evasion attacks …

Random boxes are open-world object detectors

Y Wang, Z Yue, XS Hua… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
We show that classifiers trained with random region proposals achieve state-of-the-art Open-
world Object Detection (OWOD): they can not only maintain the accuracy of the known …
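
Since the snippet names the core mechanism (class-agnostic random region proposals), a minimal sketch may help; the box-sampling scheme and the IoU threshold below are illustrative assumptions, not the paper's exact procedure.

import numpy as np

def sample_random_boxes(num_boxes, img_w, img_h, rng=None):
    # Class-agnostic proposals (x1, y1, x2, y2) drawn uniformly at random.
    rng = np.random.default_rng() if rng is None else rng
    x1 = rng.uniform(0, img_w, num_boxes)
    y1 = rng.uniform(0, img_h, num_boxes)
    x2 = rng.uniform(x1, img_w)   # guarantees x2 >= x1
    y2 = rng.uniform(y1, img_h)   # guarantees y2 >= y1
    return np.stack([x1, y1, x2, y2], axis=1)

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes.
    ix1 = np.maximum(box[0], boxes[:, 0]); iy1 = np.maximum(box[1], boxes[:, 1])
    ix2 = np.minimum(box[2], boxes[:, 2]); iy2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter + 1e-9)

# Proposals that barely overlap any annotated (known) object can serve as
# candidates for unknown objects during training.
proposals = sample_random_boxes(100, img_w=640, img_h=480)
known_gt = np.array([[50., 60., 200., 220.], [300., 100., 420., 300.]])
max_iou = np.array([iou(p, known_gt).max() for p in proposals])
unknown_candidates = proposals[max_iou < 0.3]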

Learn2perturb: an end-to-end feature perturbation learning to improve adversarial robustness

A Jeddi, MJ Shafiee, M Karg… - Proceedings of the …, 2020 - openaccess.thecvf.com
While deep neural networks achieve state-of-the-art performance across a
wide variety of applications, their vulnerability to adversarial attacks limits their widespread …
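
The title names the mechanism (learned feature perturbations); here is a minimal sketch of the general idea, assuming a trainable per-channel noise scale injected into intermediate features, which is an illustrative assumption rather than the authors' exact parameterization.

import numpy as np

class NoisyFeatureLayer:
    # Adds zero-mean Gaussian noise with a trainable per-channel scale to features.
    def __init__(self, num_channels, init_scale=0.1, rng=None):
        self.scale = np.full(num_channels, init_scale)   # trainable perturbation strengths
        self.rng = np.random.default_rng() if rng is None else rng

    def forward(self, features, training=True):
        # features: (batch, channels, height, width)
        if not training:
            return features
        noise = self.rng.standard_normal(features.shape)
        return features + self.scale[None, :, None, None] * noise

layer = NoisyFeatureLayer(num_channels=8)
feats = np.random.randn(4, 8, 16, 16)
perturbed = layer.forward(feats)                 # noisy features used during training
clean = layer.forward(feats, training=False)     # deterministic features at test time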

R-LPIPS: An adversarially robust perceptual similarity metric

S Ghazanfari, S Garg, P Krishnamurthy… - arXiv preprint arXiv …, 2023 - arxiv.org
Similarity metrics have played a significant role in computer vision, capturing the underlying
semantics of images. In recent years, advanced similarity metrics, such as the Learned …
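
For context on what an LPIPS-style metric computes, a hedged sketch: channel-normalized deep features of two images are compared with a per-channel weighted squared difference, averaged spatially and summed over layers. The random feature tensors below stand in for a real backbone, and R-LPIPS additionally learns the weights on adversarial examples, which is not shown here.

import numpy as np

def normalize_channels(feat, eps=1e-10):
    # Unit-normalize each spatial feature vector along the channel axis.
    norm = np.sqrt((feat ** 2).sum(axis=0, keepdims=True)) + eps
    return feat / norm

def lpips_style_distance(feats_a, feats_b, weights):
    # LPIPS-style distance from per-layer features of shape (channels, H, W).
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, weights):
        diff = (normalize_channels(fa) - normalize_channels(fb)) ** 2
        total += (w[:, None, None] * diff).sum(axis=0).mean()   # spatial average
    return total

# Random stand-ins for features extracted from two images at two network layers.
layers = [(16, 32, 32), (32, 16, 16)]
fa = [np.random.randn(*s) for s in layers]
fb = [np.random.randn(*s) for s in layers]
w = [np.abs(np.random.randn(s[0])) for s in layers]   # learned per-channel weights
print(lpips_style_distance(fa, fb, w))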

Random Entangled Tokens for Adversarially Robust Vision Transformer

H Gong, M Dong, S Ma, S Camtepe… - Proceedings of the …, 2024 - openaccess.thecvf.com
Vision Transformers (ViTs) have emerged as a compelling alternative to
Convolutional Neural Networks (CNNs) in computer vision, showcasing …

Sensitivity analysis of Wasserstein distributionally robust optimization problems

D Bartl, S Drapeau, J Obłój… - Proceedings of the …, 2021 - royalsocietypublishing.org
We consider the sensitivity of a generic stochastic optimization problem to model uncertainty.
We take a non-parametric approach and capture model uncertainty using Wasserstein balls …
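
In a standard (though not necessarily the paper's exact) notation, the distributionally robust value function whose sensitivity in the radius \delta is studied reads:

V(\delta) \;=\; \sup_{Q \,:\, W_p(Q, P) \le \delta} \mathbb{E}_{Q}\big[ f(X) \big],
\qquad
W_p(Q, P) \;=\; \Big( \inf_{\pi \in \Pi(Q, P)} \int \|x - y\|^p \, \pi(\mathrm{d}x, \mathrm{d}y) \Big)^{1/p},

where P is the reference model, W_p is the Wasserstein distance of order p, \Pi(Q, P) is the set of couplings of Q and P, and the sensitivity in question concerns the behaviour of V(\delta) as \delta approaches 0.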

Adversarial attack generation empowered by min-max optimization

J Wang, T Zhang, S Liu, PY Chen… - Advances in …, 2021 - proceedings.neurips.cc
The worst-case training principle that minimizes the maximal adversarial loss, also known as
adversarial training (AT), has been shown to be a state-of-the-art approach for enhancing …
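
The worst-case training principle mentioned here is, in generic notation (a standard statement of adversarial training, not this paper's specific min-max attack-generation objective):

\min_{\theta} \; \mathbb{E}_{(x, y) \sim \mathcal{D}} \Big[ \max_{\|\delta\|_p \le \epsilon} \ell\big( f_\theta(x + \delta),\, y \big) \Big],

where the inner maximization generates the adversarial perturbation \delta within an \ell_p budget \epsilon, and the outer minimization trains the model parameters \theta against it.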

[PDF] Reachability analysis of deep ReLU neural networks using facet-vertex incidence

X Yang, TT Johnson, HD Tran, T Yamaguchi, B Hoxha… - HSCC, 2021 - bhoxha.com
Deep Neural Networks (DNNs) have proven to be powerful machine
learning models for approximating complex functions. In this work, we provide an exact …

Adversarial Training: A Survey

M Zhao, L Zhang, J Ye, H Lu, B Yin, X Wang - arXiv preprint arXiv …, 2024 - arxiv.org
Adversarial training (AT) refers to integrating adversarial examples, i.e., inputs altered with
imperceptible perturbations that can significantly impact model predictions, into the training …
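
A minimal sketch of such a training loop, using a linear model with a hinge loss and a single-step (FGSM-style) inner maximization so the gradients can be written by hand; the model, loss, and step sizes are simplifying assumptions, and practical AT usually runs multi-step PGD on a deep network.

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = rng.integers(0, 2, 200) * 2 - 1        # labels in {-1, +1}
w = np.zeros(10)
eps, lr = 0.1, 0.05

def grad_hinge(w, X, y):
    # Gradient of the mean hinge loss max(0, 1 - y * <w, x>) with respect to w.
    margin = y * (X @ w)
    active = (margin < 1).astype(float)
    return -(active * y) @ X / len(y)

for epoch in range(50):
    # Inner maximization: for a linear model under an L_inf budget, the exact
    # worst-case perturbation moves each x against its label along sign(w).
    X_adv = X - eps * y[:, None] * np.sign(w)[None, :]
    # Outer minimization: gradient step on the adversarial loss.
    w -= lr * grad_hinge(w, X_adv, y)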

Towards better certified segmentation via diffusion models

O Laousy, A Araujo, G Chassagnon, MP Revel… - arXiv preprint arXiv …, 2023 - arxiv.org
The robustness of image segmentation has been an important research topic in the past few
years as segmentation models have reached production-level accuracy. However, like …
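
For background, certified segmentation of this kind builds on per-pixel randomized smoothing; below is a generic sketch of that smoothing step only. The toy thresholding "segmenter" and the noise level are placeholders, and the paper's specific contribution, denoising the noisy inputs with a diffusion model before segmentation, is not shown.

import numpy as np

def smoothed_segmentation(segmenter, image, num_classes, sigma=0.25, n_samples=100, rng=None):
    # Per-pixel majority vote of the segmenter over Gaussian-perturbed copies of the image.
    rng = np.random.default_rng() if rng is None else rng
    votes = np.zeros(image.shape + (num_classes,))
    for _ in range(n_samples):
        noisy = image + sigma * rng.standard_normal(image.shape)
        labels = segmenter(noisy)                  # (H, W) integer class map
        votes += np.eye(num_classes)[labels]       # accumulate one-hot votes per pixel
    prediction = votes.argmax(axis=-1)
    vote_share = votes.max(axis=-1) / n_samples    # used downstream to derive certified radii
    return prediction, vote_share

# Toy "segmenter": thresholds pixel intensity into two classes.
toy_segmenter = lambda img: (img > 0.5).astype(int)
image = np.random.rand(32, 32)
pred, confidence = smoothed_segmentation(toy_segmenter, image, num_classes=2)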