Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against Image Translation Models

S Szyller, V Duddu, T Gröndahl, N Asokan - arXiv preprint arXiv …, 2021 - arxiv.org
Machine learning models are typically made available to potential client users via inference
APIs. Model extraction attacks occur when a malicious client uses information gleaned from …
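For orientation, the generic query-based extraction loop these attacks instantiate can be sketched as below. This is an illustrative sketch, not the method of any particular paper here: `query_victim` is a hypothetical stand-in for the victim's inference API, and the L1 imitation loss and query strategy are placeholder choices.

```python
import torch
import torch.nn as nn

def query_victim(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical black-box API: returns the victim's translated images."""
    raise NotImplementedError  # stands in for a remote inference call

def extract(surrogate: nn.Module, loader, epochs: int = 10, lr: float = 2e-4):
    """Train a local surrogate to mimic victim outputs on attacker-chosen queries."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # pixel-wise imitation loss; real attacks vary this
    for _ in range(epochs):
        for x in loader:                    # attacker-chosen query images
            with torch.no_grad():
                y_victim = query_victim(x)  # labels come only from the API
            opt.zero_grad()
            loss = loss_fn(surrogate(x), y_victim)
            loss.backward()
            opt.step()
    return surrogate
```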

Towards Model Extraction Attacks in GAN-Based Image Translation via Domain Shift Mitigation

D Mi, Y Zhang, LY Zhang, S Hu, Q Zhong… - Proceedings of the …, 2024 - ojs.aaai.org
Model extraction attacks (MEAs) enable an attacker to replicate the functionality of a victim
deep neural network (DNN) model by only querying its API service remotely, posing a …

Practical disruption of image translation deepfake networks

N Ruiz, SA Bargal, C Xie, S Sclaroff - Proceedings of the AAAI …, 2023 - ojs.aaai.org
By harnessing the latest advances in deep learning, image-to-image translation
architectures have recently achieved impressive capabilities. Unfortunately, the growing …

Attack as the best defense: Nullifying image-to-image translation gans via limit-aware adversarial attack

CY Yeh, HW Chen, HH Shuai… - Proceedings of the …, 2021 - openaccess.thecvf.com
Due to the great success of image-to-image (Img2Img) translation GANs, many applications that raise ethical issues have arisen, e.g., DeepFake and DeepNude, presenting a challenging problem to …

Adversarial attacks for multi target image translation networks

Z Fang, Y Yang, J Lin, R Zhan - 2020 IEEE International …, 2020 - ieeexplore.ieee.org
Although image translation algorithms based on generative adversarial networks, such as StarGAN, STGAN, and StarGAN-v2, bring enormous convenience to people's work and life …

Pix2Pix GAN for image-to-image translation

J Henry, T Natalie, D Madsen - ResearchGate Publication, 2021 - researchgate.net
The Pix2Pix Generative Adversarial Network, or GAN, is an approach to training a deep
convolutional neural network for image-to-image translation tasks. The careful configuration …
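For reference, the published Pix2Pix objective combines a conditional GAN loss with an L1 reconstruction term, trained as a minimax game between generator G and discriminator D:

```latex
\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x,z)\rVert_1\big]
G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G,D) + \lambda\, \mathcal{L}_{L1}(G)
```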

Stealing Image-to-Image Translation Models With a Single Query

N Spingarn-Eliezer, T Michaeli - arXiv preprint arXiv:2406.00828, 2024 - arxiv.org
Training deep neural networks requires significant computational resources and large
datasets that are often confidential or expensive to collect. As a result, owners tend to protect …

Adversarial self-defense for cycle-consistent GANs

D Bashkirova, B Usman… - Advances in Neural …, 2019 - proceedings.neurips.cc
The goal of unsupervised image-to-image translation is to map images from one domain to
another without the ground truth correspondence between the two domains. State-of-the-art …
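For context, the cycle-consistency constraint these unsupervised translation models share, in its standard CycleGAN form with mappings G : X → Y and F : Y → X, is:

```latex
\mathcal{L}_{cyc}(G,F) = \mathbb{E}_{x}\big[\lVert F(G(x)) - x\rVert_1\big] + \mathbb{E}_{y}\big[\lVert G(F(y)) - y\rVert_1\big]
```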

Membership privacy protection for image translation models via adversarial knowledge distillation

SR Alvar, L Wang, J Pei, Y Zhang - arXiv preprint arXiv:2203.05212, 2022 - arxiv.org
Image-to-image translation models are shown to be vulnerable to the Membership Inference
Attack (MIA), in which the adversary's goal is to identify whether a sample was used to train the …
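As a toy illustration of this threat model, a loss-threshold membership test might look like the sketch below; `tau` is an assumed calibrated threshold, and the actual attack and defense studied in the paper are more sophisticated.

```python
import torch
import torch.nn.functional as F

def is_member(model, x, y, tau: float) -> bool:
    """Flag (x, y) as a suspected training pair if the model's reconstruction
    error is below a calibrated threshold tau; unusually low error on a sample
    is weak evidence the model saw it during training."""
    with torch.no_grad():
        err = F.l1_loss(model(x), y).item()
    return err < tau
```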

Protecting against image translation deepfakes by leaking universal perturbations from black-box neural networks

N Ruiz, SA Bargal, S Sclaroff - arXiv preprint arXiv:2006.06493, 2020 - arxiv.org
In this work, we develop efficient disruptions of black-box image translation deepfake
generation systems. We are the first to demonstrate black-box deepfake generation …