Diffusion models for imperceptible and transferable adversarial attack

J Chen, H Chen, K Chen, Y Zhang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Many existing adversarial attacks generate L_p-norm perturbations on image RGB space.
Despite some achievements in transferability and attack success rate, the crafted adversarial …

A comprehensive study on the robustness of deep learning-based image classification and object detection in remote sensing: Surveying and benchmarking

S Mei, J Lian, X Wang, Y Su, M Ma… - Journal of Remote …, 2024 - spj.science.org
Deep neural networks (DNNs) have found widespread applications in interpreting remote
sensing (RS) imagery. However, it has been demonstrated in previous works that DNNs are …

On the design fundamentals of diffusion models: A survey

Z Chang, GA Koulieris, HPH Shum - arXiv preprint arXiv:2306.04542, 2023 - arxiv.org
Diffusion models are generative models, which gradually add and remove noise to learn the
underlying distribution of training data for data generation. The components of diffusion …

Accelerated optimization in deep learning with a proportional-integral-derivative controller

S Chen, J Liu, P Wang, C Xu, S Cai, J Chu - Nature Communications, 2024 - nature.com
High-performance optimization algorithms are essential in deep learning. However,
understanding the behavior of optimization (i.e., the learning process) remains challenging due …

Diffattack: Evasion attacks against diffusion-based adversarial purification

M Kang, D Song, B Li - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Diffusion-based purification defenses leverage diffusion models to remove crafted
perturbations of adversarial examples and achieve state-of-the-art robustness. Recent …

Extraction and recovery of spatio-temporal structure in latent dynamics alignment with diffusion models

Y Wang, Z Wu, C Li, A Wu - Advances in Neural Information …, 2023 - proceedings.neurips.cc
In the field of behavior-related brain computation, it is necessary to align raw neural signals
against the drastic domain shift among them. A foundational framework within neuroscience …

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?

Z Zhao, J Duan, K Xu, C Wang… - Proceedings of the …, 2024 - openaccess.thecvf.com
Stable Diffusion has established itself as a foundation model in generative AI artistic
applications, receiving widespread research and application. Some recent fine-tuning …

Sa-attack: Improving adversarial transferability of vision-language pre-training models via self-augmentation

B He, X Jia, S Liang, T Lou, Y Liu, X Cao - arXiv preprint arXiv:2312.04913, 2023 - arxiv.org
Current Visual-Language Pre-training (VLP) models are vulnerable to adversarial examples.
These adversarial examples present substantial security risks to VLP models, as they can …

On the robustness of latent diffusion models

J Zhang, Z Xu, S Cui, C Meng, W Wu… - arXiv preprint arXiv …, 2023 - arxiv.org
Latent diffusion models achieve state-of-the-art performance on a variety of generative tasks,
such as image synthesis and image editing. However, the robustness of latent diffusion …

Toward effective protection against diffusion-based mimicry through score distillation

H Xue, C Liang, X Wu, Y Chen - The Twelfth International …, 2023 - openreview.net
While generative diffusion models excel in producing high-quality images, they can also be
misused to mimic authorized images, posing a significant threat to AI systems. Efforts have …