1. Can Diffusion Model Achieve Better Performance in Text Generation? Bridging the Gap between Training and Inference!
   Z. Tang, P. Wang, K. Zhou, J. Li, Z. Cao, M. Zhang
   arXiv preprint arXiv:2305.04465, 2023. Cited by: 10.

2. Detoxify Language Model Step-by-Step
   Z. Tang, K. Zhou, P. Wang, Y. Ding, J. Li
   arXiv preprint arXiv:2308.08295, 2023. Cited by: 7.

3. UFNRec: Utilizing False Negative Samples for Sequential Recommendation
   X. Liu, C. Liu, P. Wang, R. Zheng, L. Zhang, L. Lin, Z. Chen, L. Fu
   Proceedings of the 2023 SIAM International Conference on Data Mining (SDM …, 2023. Cited by: 6.

4. Rethinking Negative Instances for Generative Named Entity Recognition
   Y. Ding, J. Li, P. Wang, Z. Tang, B. Yan, M. Zhang
   arXiv preprint arXiv:2402.16602, 2024. Cited by: 4.

5. Future Augmentation with Self-distillation in Recommendation
   C. Liu, R. Xie, X. Liu, P. Wang, R. Zheng, L. Zhang, J. Li, F. Xia, L. Lin
   Joint European Conference on Machine Learning and Knowledge Discovery in …, 2023. Cited by: 1.

6. OpenBA: An Open-sourced 15B Bilingual Asymmetric seq2seq Model Pre-trained from Scratch
   J. Li, Z. Tang, Y. Ding, P. Wang, P. Guo, W. You, D. Qiao, W. Chen, G. Fu, ...
   arXiv preprint arXiv:2309.10706, 2023.