Pitfalls in language models for code intelligence: A taxonomy and survey

X She, Y Liu, Y Zhao, Y He, L Li… - arXiv preprint arXiv …, 2023 - arxiv.org
Modern language models (LMs) have been successfully employed in source code
generation and understanding, leading to a significant increase in research focused on …

Adversarial machine learning in industry: A systematic literature review

FV Jedrzejewski, L Thode, J Fischbach, T Gorschek… - Computers & …, 2024 - Elsevier
Adversarial Machine Learning (AML) discusses the act of attacking and defending
Machine Learning (ML) Models, an essential building block of Artificial Intelligence (AI). ML …

Deep intellectual property protection: A survey

Y Sun, T Liu, P Hu, Q Liao, S Fu, N Yu, D Guo… - arXiv preprint arXiv …, 2023 - arxiv.org
Deep Neural Networks (DNNs), from AlexNet to ResNet to ChatGPT, have made
revolutionary progress in recent years, and are widely used in various fields. The high …

MEGEX: Data-free model extraction attack against gradient-based explainable AI

T Miura, T Shibahara, N Yanai - Proceedings of the 2nd ACM Workshop …, 2024 - dl.acm.org
Explainable AI encourages machine learning applications in the real world, whereas data-
free model extraction attacks (DFME), in which an adversary steals a trained machine …

Identifying appropriate intellectual property protection mechanisms for machine learning models: A systematization of watermarking, fingerprinting, model access, and …

I Lederer, R Mayer, A Rauber - IEEE Transactions on Neural …, 2023 - ieeexplore.ieee.org
The commercial use of machine learning (ML) is spreading; at the same time, ML models
are becoming more complex and more expensive to train, which makes intellectual property …

Your Transferability Barrier is Fragile: Free-Lunch for Transferring the Non-Transferable Learning

Z Hong, L Shen, T Liu - … of the IEEE/CVF Conference on …, 2024 - openaccess.thecvf.com
Recently, non-transferable learning (NTL) was proposed to restrict models' generalization
toward the target domain(s), which serves as a state-of-the-art solution for intellectual …

Model Reconstruction Using Counterfactual Explanations: Mitigating the Decision Boundary Shift

P Dissanayake, S Dutta - arXiv preprint arXiv:2405.05369, 2024 - arxiv.org
Counterfactual explanations find ways of achieving a favorable model outcome with
minimum input perturbation. However, counterfactual explanations can also be exploited to …

Defense Against Model Extraction Attacks on Recommender Systems

S Zhang, H Yin, H Chen, C Long - … Conference on Web Search and Data …, 2024 - dl.acm.org
The robustness of recommender systems has become a prominent topic within the research
community. Numerous adversarial attacks have been proposed, but most of them rely on …

Desiderata for next generation of ML model serving

S Akoush, A Paleyes, A Van Looveren… - arXiv preprint arXiv …, 2022 - arxiv.org
Inference is a significant part of ML software infrastructure. Despite the variety of inference
frameworks available, the field as a whole can be considered in its early days. This position …

SoK: Pitfalls in evaluating black-box attacks

F Suya, A Suri, T Zhang, J Hong… - … IEEE Conference on …, 2024 - ieeexplore.ieee.org
Numerous works study black-box attacks on image classifiers, where adversaries generate
adversarial examples against unknown target models without having access to their internal …