Can Generative Pre-trained Language Models Serve as Knowledge Bases for Closed-book QA? | C Wang, P Liu, Y Zhang | ACL 2021 | Cited by 72
Exploring generative neural temporal point process | H Lin, L Wu, G Zhao, P Liu, SZ Li | TMLR, 2022 | Cited by 14
Open Information Extraction from 2007 to 2022 -- A Survey | P Liu, W Gao, W Dong, S Huang, Y Zhang | arXiv preprint arXiv:2208.08690, 2022 | Cited by 5
QiaoNing at SemEval-2020 Task 4: Commonsense Validation and Explanation system based on ensemble of language model | P Liu | SemEval 2020 | Cited by 3
A Survey on Open Information Extraction from Rule-based Model to Large Language Model | Pai Liu*, Wenyang Gao*, Wenjie Dong*, Lin Ai*, Ziwei Gong*, Songfang ... | https://arxiv.org/abs/2208.08690, 2024
NEUer at SemEval-2021 Task 4: Complete Summary Representation by Filling Answers into Question for Matching Reading Comprehension | Z Chen, Y Lei, P Liu, G Guo | SemEval 2021