Is ChatGPT good at search? Investigating large language models as re-ranking agents. W Sun, L Yan, X Ma, P Ren, D Yin, Z Ren. arXiv preprint arXiv:2304.09542, 2023. Cited by: 190*.
PROP: Pre-training with representative words prediction for ad-hoc retrieval. X Ma, J Guo, R Zhang, Y Fan, X Ji, X Cheng. Proceedings of the 14th ACM International Conference on Web Search and Data …, 2021. Cited by: 97.
Pre-training methods in information retrieval. Y Fan, X Xie, Y Cai, J Chen, X Ma, X Li, R Zhang, J Guo. Foundations and Trends® in Information Retrieval 16 (3), 178-317, 2022. Cited by: 67.
B-PROP: Bootstrapped pre-training with representative words prediction for ad-hoc retrieval. X Ma, J Guo, R Zhang, Y Fan, Y Li, X Cheng. Proceedings of the 44th International ACM SIGIR Conference on Research and …, 2021. Cited by: 57.
Pre-train a discriminative text encoder for dense retrieval via contrastive span prediction. X Ma, J Guo, R Zhang, Y Fan, X Cheng. Proceedings of the 45th International ACM SIGIR Conference on Research and …, 2022. Cited by: 42.
A linguistic study on relevance modeling in information retrieval. Y Fan, J Guo, X Ma, R Zhang, Y Lan, X Cheng. Proceedings of the Web Conference 2021, 1053-1064, 2021. Cited by: 13.
Scattered or connected? An optimized parameter-efficient tuning approach for information retrieval. X Ma, J Guo, R Zhang, Y Fan, X Cheng. Proceedings of the 31st ACM International Conference on Information …, 2022. Cited by: 9.
Instruction distillation makes large language models efficient zero-shot rankers. W Sun, Z Chen, X Ma, L Yan, S Wang, P Ren, Z Chen, D Yin, Z Ren. arXiv preprint arXiv:2311.01555, 2023. Cited by: 7.
A contrastive pre-training approach to discriminative autoencoder for dense retrieval. X Ma, R Zhang, J Guo, Y Fan, X Cheng. Proceedings of the 31st ACM International Conference on Information …, 2022. Cited by: 7.
Pre-training with aspect-content text mutual prediction for multi-aspect dense retrieval. X Sun, K Bi, J Guo, X Ma, Y Fan, H Shan, Q Zhang, Z Liu. Proceedings of the 32nd ACM International Conference on Information and …, 2023. Cited by: 3.
The butterfly effect of model editing: Few edits can trigger large language models collapse. W Yang, F Sun, X Ma, X Liu, D Yin, X Cheng. arXiv preprint arXiv:2402.09656, 2024. Cited by: 2.
The Fall of ROME: Understanding the Collapse of LLMs in Model Editing. W Yang, F Sun, J Tan, X Ma, D Su, D Yin, H Shen. arXiv preprint arXiv:2406.11263, 2024.