FuzzLLM: A Novel and Universal Fuzzing Framework for Proactively Discovering Jailbreak Vulnerabilities in Large Language Models. D Yao*, J Zhang*, IG Harris, M Carlsson. ICASSP 2024. Cited by 35.
MLLM-Protector: Ensuring MLLM's Safety without Hurting Performance. R Pi*, T Han*, J Zhang*, Y Xie, R Pan, Q Lian, H Dong, J Zhang, T Zhang. EMNLP 2024 (Main). Cited by 34.
Personalized Visual Instruction Tuning. R Pi*, J Zhang*, T Han, J Zhang, R Pan, T Zhang. arXiv preprint arXiv:2410.07113, 2024. Cited by 2.
Image Textualization: An Automatic Framework for Creating Accurate and Detailed Image Descriptions. R Pi*, J Zhang*, J Zhang, R Pan, Z Chen, T Zhang. NeurIPS 2024 (Datasets & Benchmarks Track). Cited by 2.
CORE: Mitigating Catastrophic Forgetting in Continual Learning through Cognitive Replay. J Zhang, Y Fu, Z Peng, D Yao, K He. CogSci 2024 (Oral). Cited by 2.
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation. KS Shum*, M Xu*, J Zhang*, Z Chen, S Diao, H Dong, J Zhang, MO Raza. EMNLP 2024 (Main). Cited by 1.
Bridge-Coder: Unlocking LLMs' Potential to Overcome Language Gaps in Low-Resource Code. J Zhang*, J Zhang*, Y Li*, R Pi, R Pan, R Liu, Z Zheng, T Zhang. arXiv preprint arXiv:2410.18957, 2024.