Bridging machine learning and logical reasoning by abductive learning. WZ Dai, Q Xu, Y Yu, ZH Zhou. Advances in Neural Information Processing Systems 32, 2019. Cited by 159.
Backdoor scanning for deep neural networks through k-arm optimization. G Shen, Y Liu, G Tao, S An, Q Xu, S Cheng, S Ma, X Zhang. International Conference on Machine Learning, 9525-9536, 2021. Cited by 106.
Better trigger inversion optimization in backdoor scanning. G Tao, G Shen, Y Liu, S An, Q Xu, S Ma, P Li, X Zhang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 67.
Model orthogonalization: Class distance hardening in neural networks for better security. G Tao, Y Liu, G Shen, Q Xu, S An, Z Zhang, X Zhang. 2022 IEEE Symposium on Security and Privacy (SP), 1372-1389. Cited by 47.
Mirror: Model inversion for deep learning network with high fidelity. S An, G Tao, Q Xu, Y Liu, G Shen, Y Yao, J Xu, X Zhang. Proceedings of the 29th Network and Distributed System Security Symposium, 2022. Cited by 37.
Tunneling neural perception and logic reasoning through abductive learning. WZ Dai, QL Xu, Y Yu, ZH Zhou. arXiv preprint arXiv:1802.01173, 2018. Cited by 36.
Flip: A provable defense framework for backdoor mitigation in federated learning. K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu, S Feng, G Shen, PY Chen, et al. arXiv preprint arXiv:2210.12873, 2022. Cited by 34.
Constrained optimization with dynamic bound-scaling for effective NLP backdoor defense. G Shen, Y Liu, G Tao, Q Xu, Z Zhang, S An, S Ma, X Zhang. International Conference on Machine Learning, 19879-19892, 2022. Cited by 31.
Towards feature space adversarial attack. Q Xu, G Tao, S Cheng, X Zhang. arXiv preprint arXiv:2004.12385, 2020. Cited by 30.
Towards feature space adversarial attack by style perturbation. Q Xu, G Tao, S Cheng, X Zhang. Proceedings of the AAAI Conference on Artificial Intelligence 35 (12), 10523 …, 2021. Cited by 24.
Trader: Trace divergence analysis and embedding regulation for debugging recurrent neural networks. G Tao, S Ma, Y Liu, Q Xu, X Zhang. Proceedings of the ACM/IEEE 42nd International Conference on Software …, 2020. Cited by 13.
Beagle: Forensics of deep learning backdoor attack for better defense. S Cheng, G Tao, Y Liu, S An, X Xu, S Feng, G Shen, K Zhang, Q Xu, S Ma, et al. arXiv preprint arXiv:2301.06241, 2023. Cited by 11.
Elijah: Eliminating backdoors injected in diffusion models via distribution shift. S An, SY Chou, K Zhang, Q Xu, G Tao, G Shen, S Cheng, S Ma, PY Chen, et al. Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 10847 …, 2024. Cited by 7.
Medic: Remove model backdoors via importance driven cloning. Q Xu, G Tao, J Honorio, Y Liu, S An, G Shen, S Cheng, X Zhang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023. Cited by 7.
Deck: Model hardening for defending pervasive backdoors. G Tao, Y Liu, S Cheng, S An, Z Zhang, Q Xu, G Shen, X Zhang. arXiv preprint arXiv:2206.09272, 2022. Cited by 5.
Bounded adversarial attack on deep content features. Q Xu, G Tao, X Zhang. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022. Cited by 4.
PELICAN: Exploiting backdoors of naturally trained deep learning models in binary code analysis. Z Zhang, G Tao, G Shen, S An, Q Xu, Y Liu, Y Ye, Y Wu, X Zhang. 32nd USENIX Security Symposium (USENIX Security 23), 2365-2382, 2023. Cited by 3.
A Le Cam type bound for adversarial learning and applications. Q Xu, K Bello, J Honorio. 2021 IEEE International Symposium on Information Theory (ISIT), 1164-1169. Cited by 3.
D-square-b: Deep distribution bound for natural-looking adversarial attack. Q Xu, G Tao, X Zhang. arXiv preprint arXiv:2006.07258, 2020. Cited by 2.