Attacking deep reinforcement learning with decoupled adversarial policy. K Mo, W Tang, J Li, X Yuan. IEEE Transactions on Dependable and Secure Computing 20(1), 758-768, 2022. Cited by 71.
Querying little is enough: model inversion attack via latent information. K Mo, T Huang, X Xiang. Machine Learning for Cyber Security: Third International Conference, ML4CS …, 2020. Cited by 37.
An efficient adversarial example generation algorithm based on an accelerated gradient iterative fast gradient. J Liu, Q Zhang, K Mo, X Xiang, J Li, D Cheng, R Gao, B Liu, K Chen, … Computer Standards & Interfaces 82, 103612, 2022. Cited by 33.
Sender anonymity: Applying ring signature in gateway-based blockchain for IoT is not enough. ASV Koe, S Ai, P Huang, A Yan, J Tang, Q Chen, K Mo, W Jie, S Zhang. Information Sciences 606, 60-71, 2022. Cited by 12.
ESM: Selfish mining under ecological footprint. S Ai, G Yang, C Chen, K Mo, W Lv, ASV Koe. Information Sciences 606, 601-613, 2022. Cited by 5.
Security and Privacy Issues in Deep Reinforcement Learning: Threats and Countermeasures. K Mo, P Ye, X Ren, S Wang, W Li, J Li. ACM Computing Surveys 56(6), 1-39, 2024. Cited by 1.
Empirical study of privacy inference attack against deep reinforcement learning models. H Zhou, K Mo, T Huang, Y Li. Connection Science 35(1), 2211240, 2023. Cited by 1.