SPACE-1: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. W He, Y Dai, Y Zheng, Y Wu, Z Cao, D Liu, P Jiang, M Yang, F Huang, et al. Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 10749 …, 2022. Cited by 128.
UniMSE: Towards unified multimodal sentiment analysis and emotion recognition. G Hu, TE Lin, Y Zhao, G Lu, Y Wu, Y Li. arXiv preprint arXiv:2211.11256, 2022. Cited by 82.
SPACE-3: Unified dialog model pre-training for task-oriented dialog understanding and generation. W He, Y Dai, M Yang, J Sun, F Huang, L Si, Y Li. Proceedings of the 45th International ACM SIGIR Conference on Research and …, 2022. Cited by 48.
SPACE-2: Tree-structured semi-supervised contrastive pre-training for task-oriented dialog understanding. W He, Y Dai, B Hui, M Yang, Z Cao, J Dong, F Huang, L Si, Y Li. arXiv preprint arXiv:2209.06638, 2022. Cited by 28.
SpokenWOZ: A large-scale speech-text benchmark for spoken task-oriented dialogue in multiple domains. S Si, W Ma, Y Wu, Y Dai, H Gao, TE Lin, H Li, R Yan, F Huang, Y Li. arXiv preprint arXiv:2305.13040, 2023. Cited by 16*.
Analyzing developer behavior and community structure in software crowdsourcing. H Zhang, Y Wu, W Wu. Information Science and Applications, 981-988, 2015. Cited by 16.
Duplex conversation: Towards human-like interaction in spoken dialogue systems. TE Lin, Y Wu, F Huang, L Si, J Sun, Y Li. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and …, 2022. Cited by 14.
Empathetic response generation via emotion cause transition graph. Y Qian, B Wang, TE Lin, Y Zheng, Y Zhu, D Zhao, Y Hou, Y Wu, Y Li. ICASSP 2023 - IEEE International Conference on Acoustics, Speech and …, 2023. Cited by 13.
A slot is not built in one utterance: Spoken language dialogs with sub-slots. S Zhang, Y Hu, Y Wu, J Wu, Y Li, J Sun, C Yuan, X Wang. arXiv preprint arXiv:2203.10759, 2022. Cited by 13.
Software crowdsourcing for developing Software-as-a-Service. X Xu, W Wu, Y Wang, Y Wu. Frontiers of Computer Science 9, 554-565, 2015. Cited by 13.
Unsupervised dialogue topic segmentation with topic-aware utterance representation. H Gao, R Wang, TE Lin, Y Wu, M Yang, F Huang, Y Li. arXiv preprint arXiv:2305.02747, 2023. Cited by 11*.
CGoDial: A large-scale benchmark for Chinese goal-oriented dialog evaluation. Y Dai, W He, B Li, Y Wu, Z Cao, Z An, J Sun, Y Li. arXiv preprint arXiv:2211.11617, 2022. Cited by 10.
Speech-text pre-training for spoken dialog understanding with explicit cross-modal alignment. T Yu, H Gao, TE Lin, M Yang, Y Wu, W Ma, C Wang, F Huang, Y Li. Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023. Cited by 9*.
Fortify the shortest stave in attention: Enhancing context awareness of large language models for effective tool use. Y Chen, A Lv, TE Lin, C Chen, Y Wu, F Huang, Y Li, R Yan. arXiv preprint arXiv:2312.04455, 2023. Cited by 6.
Masked thought: Simply masking partial reasoning steps can improve mathematical reasoning learning of language models. C Chen, X Wang, TE Lin, A Lv, Y Wu, X Gao, JR Wen, R Yan, Y Li. arXiv preprint arXiv:2403.02178, 2024. Cited by 5.
A survey on self-evolution of large language models. Z Tao, TE Lin, X Chen, H Li, Y Wu, Y Li, Z Jin, F Huang, D Tao, J Zhou. arXiv preprint arXiv:2404.14387, 2024. Cited by 4.
Constructive large language models alignment with diverse feedback. T Yu, TE Lin, Y Wu, M Yang, F Huang, Y Li. arXiv preprint arXiv:2310.06450, 2023. Cited by 4.
UniSA: Unified generative framework for sentiment analysis. Z Li, TE Lin, Y Wu, M Liu, F Tang, M Zhao, Y Li. Proceedings of the 31st ACM International Conference on Multimedia, 6132-6142, 2023. Cited by 3.
Self-explanation prompting improves dialogue understanding in large language models. H Gao, TE Lin, H Li, M Yang, Y Wu, W Ma, Y Li. arXiv preprint arXiv:2309.12940, 2023. Cited by 3.
Improving factual consistency of text summarization by adversarially decoupling comprehension and embellishment abilities of LLMs. H Feng, Y Fan, X Liu, TE Lin, Z Yao, Y Wu, F Huang, Y Li, Q Ma. arXiv preprint arXiv:2310.19347, 2023. Cited by 2.