Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese. Z. Zhang, H. Zhang, K. Chen, Y. Guo, J. Hua, Y. Wang, M. Zhou. arXiv preprint arXiv:2110.06696, 2021.
Is it Possible to Edit Large Language Models Robustly? X. Ma, T. Ju, J. Qiu, Z. Zhang, H. Zhao, L. Liu, Y. Wang. arXiv preprint arXiv:2402.05827, 2024.
Interpreting Key Mechanisms of Factual Recall in Transformer-Based Language Models. A. Lv, K. Zhang, Y. Chen, Y. Wang, L. Liu, J.-R. Wen, J. Xie, R. Yan. arXiv preprint arXiv:2403.19521, 2024.
Sibyl: Simple yet Effective Agent Framework for Complex Real-world Reasoning. Y. Wang, T. Shen, L. Liu, J. Xie. arXiv preprint arXiv:2407.10718, 2024.
Flooding Spread of Manipulated Knowledge in LLM-Based Multi-Agent Communities. T. Ju, Y. Wang, X. Ma, P. Cheng, H. Zhao, Y. Wang, L. Liu, J. Xie, Z. Zhang, et al. arXiv preprint arXiv:2407.07791, 2024.