Domain watermark: Effective and harmless dataset copyright protection is closed at hand

J Guo, Y Li, L Wang, ST Xia… - Advances in Neural …, 2024 - proceedings.neurips.cc
The prosperity of deep neural networks (DNNs) has largely benefited from open-source
datasets, based on which users can evaluate and improve their methods. In this paper, we …

When Industrial Radio Security Meets AI: Opportunities and Challenges

W Li, G Chen, X Zhang, N Wang, S Lv… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
The rapid development of artificial intelligence (AI) has brought about revolutionary changes
to industrial wireless networks. Meanwhile, these AI models have also incurred a more …

Federated Learning with New Knowledge: Fundamentals, Advances, and Futures

L Wang, Y Zhao, J Dong, A Yin, Q Li, X Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated Learning (FL) is a privacy-preserving distributed learning approach that is rapidly
developing in an era where privacy protection is increasingly valued. It is this rapid …

SUB-PLAY: Adversarial Policies against Partially Observed Multi-Agent Reinforcement Learning Systems

O Ma, Y Pu, L Du, Y Dai, R Wang, X Liu, Y Wu… - arXiv preprint arXiv …, 2024 - arxiv.org
Recent advances in multi-agent reinforcement learning (MARL) have opened up vast
application prospects, including swarm control of drones, collaborative manipulation by …

Mitigating Deep Reinforcement Learning Backdoors in the Neural Activation Space

S Vyas, C Hicks, V Mavroudis - 2024 IEEE Security and Privacy …, 2024 - ieeexplore.ieee.org
This paper investigates the threat of backdoors in Deep Reinforcement Learning (DRL)
agent policies and proposes a novel method for their detection at runtime. Our study focuses …

Towards Optimal Adversarial Robust Q-learning with Bellman Infinity-error

H Li, Z Zhang, W Luo, C Han, Y Hu, T Guo… - arXiv preprint arXiv …, 2024 - arxiv.org
Establishing robust policies is essential to counter attacks or disturbances affecting deep
reinforcement learning (DRL) agents. Recent studies explore state-adversarial robustness …

Exploring Backdoor Attacks against Large Language Model-based Decision Making

R Jiao, S Xie, J Yue, T Sato, L Wang, Y Wang… - arXiv preprint arXiv …, 2024 - arxiv.org
Large Language Models (LLMs) have shown significant promise in decision-making tasks
when fine-tuned on specific applications, leveraging their inherent common sense and …

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

Y Li, M Zhu, J Guo, T Wei, ST Xia, Z Qin - arXiv preprint arXiv:2312.04584, 2023 - arxiv.org
Currently, sample-specific backdoor attacks (SSBAs) are the most advanced and malicious
methods since they can easily circumvent most of the current backdoor defenses. In this …

Backdozer: A Backdoor Detection Methodology for DRL-based Traffic Controllers

Y Wang, W Li, M Alam, M Maniatakos… - Journal on Autonomous …, 2023 - dl.acm.org
While the advent of Deep Reinforcement Learning (DRL) has substantially improved the
efficiency of Autonomous Vehicles (AVs), it makes them vulnerable to backdoor attacks that …