What Makes a Good Dataset for Knowledge Distillation?

L Frank, J Davis - arXiv preprint arXiv:2411.12817, 2024 - arxiv.org
Knowledge distillation (KD) has been a popular and effective method for model
compression. One important assumption of KD is that the teacher's original dataset will also …

Attention-SA: Exploiting Model-approximated Data Semantics for Adversarial Attack

Q Li, Q Hu, H Fan, C Lin, C Shen… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Adversarial defense of deep neural networks has gained significant attention, and there
have been active research efforts on model blind spots for attacks, such as gradient …

SoK: Dataset Copyright Auditing in Machine Learning Systems

L Du, X Zhou, M Chen, C Zhang, Z Su, P Cheng… - arXiv preprint arXiv …, 2024 - arxiv.org
As the implementation of machine learning (ML) systems becomes more widespread,
especially with the introduction of larger ML models, we perceive a surging demand for …

Intellectual Property Protection for Deep Learning Model and Dataset Intelligence

Y Jiang, Y Gao, C Zhou, H Hu, A Fu… - arXiv preprint arXiv …, 2024 - arxiv.org
With the growing applications of Deep Learning (DL), especially the recent spectacular
achievements of Large Language Models (LLMs) such as ChatGPT and LLaMA, the …

Know2Vec: A Black-Box Proxy for Neural Network Retrieval

Z Shang, Y Liu, J Liu, X Gu, Y Ding, X Ji - arXiv preprint arXiv:2412.16251, 2024 - arxiv.org
For general users, training a neural network from scratch is usually challenging and labor-
intensive. Fortunately, neural network zoos enable them to find a well-performing model for …

Robust Hashing for Neural Network Models via Heterogeneous Graph Representation

L Huang, Y Tao, C Qin, X Zhang - IEEE Signal Processing …, 2024 - ieeexplore.ieee.org
How to protect the intellectual property (IP) of neural network models has become a prominent
research topic. Model hashing is an important model protection scheme, which …

Take Fake as Real: Realistic-like Robust Black-box Adversarial Attack to Evade AIGC Detection

C Xie, D Ye, Y Zhang, L Tang, Y Lv, J Deng… - arXiv preprint arXiv …, 2024 - arxiv.org
The security of AI-generated content (AIGC) detection based on GANs and diffusion models
is closely related to the credibility of multimedia content. Malicious adversarial attacks can …

Data-Free Knowledge Distillation Using Adversarially Perturbed OpenGL Shader Images

L Frank, J Davis - arXiv preprint arXiv:2310.13782, 2023 - arxiv.org
Knowledge distillation (KD) has been a popular and effective method for model
compression. One important assumption of KD is that the original training dataset is always …