Image shortcut squeezing: Countering perturbative availability poisons with compression

Z Liu, Z Zhao, M Larson - International conference on …, 2023 - proceedings.mlr.press
Perturbative availability poisoning (PAP) adds small changes to images to prevent their use
for model training. Current research adopts the belief that practical and effective approaches …
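
As a concrete illustration of the mechanism these papers study, below is a minimal PyTorch sketch (not taken from any of the listed works) of one common PAP recipe, error-minimizing noise: a perturbation delta, kept imperceptible by an L-infinity budget eps, is optimized so that poisoned samples become trivially easy for the model, leaving it nothing useful to learn. All identifiers here (EPS, apply_poison, craft_error_minimizing_noise) are illustrative assumptions, and only the inner noise-update step of the usual bi-level scheme is shown.

```python
# Illustrative sketch of a perturbative availability poison (PAP).
# Assumed names: EPS, apply_poison, craft_error_minimizing_noise.
import torch
import torch.nn.functional as F

EPS = 8 / 255  # typical imperceptibility budget for images scaled to [0, 1]

def apply_poison(images: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Add a bounded perturbation and keep pixels in the valid range."""
    delta = delta.clamp(-EPS, EPS)            # enforce the L-inf budget
    return (images + delta).clamp(0.0, 1.0)   # stay a valid image

def craft_error_minimizing_noise(model, images, labels, steps=20, alpha=2 / 255):
    """Inner step of the error-minimizing recipe: optimize delta so the
    poisoned samples are *easy* for the model (loss is minimized)."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(apply_poison(images, delta)), labels)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend: make samples easy
            delta.clamp_(-EPS, EPS)
        delta.grad.zero_()
    return delta.detach()
```

In the full bi-level scheme from this line of work, the noise step above alternates with ordinary training updates of the model; the defenses surveyed here (e.g., compression in image shortcut squeezing) aim to destroy the shortcut that such noise creates.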

Transferable unlearnable examples

J Ren, H Xu, Y Wan, X Ma, L Sun, J Tang - arXiv preprint arXiv …, 2022 - arxiv.org
With more people publishing their personal data online, unauthorized data usage has become a serious concern. Unlearnable strategies have been introduced to prevent …

Unlearnable clusters: Towards label-agnostic unlearnable examples

J Zhang, X Ma, Q Yi, J Sang… - Proceedings of the …, 2023 - openaccess.thecvf.com
There is growing interest in developing unlearnable examples (UEs) to counter visual privacy leaks on the Internet. UEs are training samples with added invisible but unlearnable noise …

Unlearnable examples give a false sense of security: Piercing through unexploitable data with learnable examples

W Jiang, Y Diao, H Wang, J Sun, M Wang… - Proceedings of the 31st …, 2023 - dl.acm.org
Safeguarding data from unauthorized exploitation is vital for privacy and security, especially given the recent surge of security breaches such as adversarial/membership attacks. To …

APBench: A unified benchmark for availability poisoning attacks and defenses

T Qin, X Gao, J Zhao, K Ye, CZ Xu - arXiv preprint arXiv:2308.03258, 2023 - arxiv.org
The efficacy of availability poisoning, a method of poisoning data by injecting imperceptible
perturbations to prevent its use in model training, has been a hot subject of investigation …

Detection and defense of unlearnable examples

Y Zhu, L Yu, XS Gao - Proceedings of the AAAI Conference on Artificial …, 2024 - ojs.aaai.org
Privacy preservation has become increasingly critical with the emergence of social media.
Unlearnable examples have been proposed to avoid leaking personal information on the …

Semantic deep hiding for robust unlearnable examples

R Meng, C Yi, Y Yu, S Yang, B Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Ensuring data privacy and protection has become paramount in the era of deep learning. Unlearnable examples have been proposed to mislead deep learning models and prevent data …

Multimodal unlearnable examples: Protecting data against multimodal contrastive learning

X Liu, X Jia, Y Xun, S Liang, X Cao - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
Multimodal contrastive learning (MCL) has shown remarkable advances in zero-shot
classification by learning from millions of image-caption pairs crawled from the Internet …

GraphCloak: Safeguarding Task-specific Knowledge within Graph-structured Data from Unauthorized Exploitation

Y Liu, C Fan, X Chen, P Zhou, L Sun - arXiv preprint arXiv:2310.07100, 2023 - arxiv.org
As Graph Neural Networks (GNNs) become increasingly prevalent in a variety of fields, from
social network analysis to protein-protein interaction studies, growing concerns have …

Transferable availability poisoning attacks

Y Liu, M Backes, X Zhang - arXiv preprint arXiv:2310.05141, 2023 - arxiv.org
We consider availability data poisoning attacks, where an adversary aims to degrade the
overall test accuracy of a machine learning model by crafting small perturbations to its …