| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks | B Wang, Y Yao, S Shan, H Li, B Viswanath, H Zheng, BY Zhao | 2019 IEEE Symposium on Security and Privacy (SP), 707-723 | 1469 | 2019 |
| Latent Backdoor Attacks on Deep Neural Networks | Y Yao, H Li, H Zheng, BY Zhao | Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications … | 428 | 2019 |
| Fawkes: Protecting Privacy Against Unauthorized Deep Learning Models | S Shan, E Wenger, J Zhang, H Li, H Zheng, BY Zhao | 29th USENIX Security Symposium (USENIX Security 20), 1589-1604 | 292 | 2020 |
| Wearable Microphone Jamming | Y Chen, H Li, SY Teng, S Nagels, Z Li, P Lopes, BY Zhao, H Zheng | Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems … | 78 | 2020 |
| Blacklight: Scalable Defense for Neural Networks Against Query-Based Black-Box Attacks | H Li, S Shan, E Wenger, J Zhang, H Zheng, BY Zhao | 31st USENIX Security Symposium (USENIX Security 22), 2117-2134 | 50 | 2022 |
| Piracy Resistant Watermarks for Deep Neural Networks | H Li, E Wenger, S Shan, BY Zhao, H Zheng | arXiv preprint arXiv:1910.01226 | 49 | 2019 |
| Blacklight: Defending Black-Box Adversarial Attacks on Deep Neural Networks | H Li, S Shan, E Wenger, J Zhang, H Zheng, BY Zhao | arXiv preprint arXiv:2006.14042 | 41 | 2020 |
| Persistent and Unforgeable Watermarks for Deep Neural Networks | H Li, E Willson, H Zheng, BY Zhao | arXiv preprint arXiv:1910.01226 | 23 | 2019 |
| Understanding the Effectiveness of Ultrasonic Microphone Jammer | Y Chen, H Li, S Nagels, Z Li, P Lopes, BY Zhao, H Zheng | arXiv preprint arXiv:1904.08490 | 16 | 2019 |
| Can Backdoor Attacks Survive Time-Varying Models | H Li, AN Bhagoji, BY Zhao, H Zheng | arXiv preprint arXiv:2206.04677 | 3 | 2022 |
| Demonstrating Wearable Microphone Jamming | Y Chen, H Li, SY Teng, S Nagels, Z Li, P Lopes, H Zheng, BY Zhao | Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing … | | 2020 |