Zhen Wan
1st year Ph.D. student, Kyoto University
Verified email at nlp.ist.i.kyoto-u.ac.jp
Title
Cited by
Year
GPT-RE: In-context Learning for Relation Extraction using Large Language Models
Z Wan, F Cheng, Z Mao, Q Liu, H Song, J Li, S Kurohashi
EMNLP 2023, 2023
74 · 2023
Pushing the Limits of ChatGPT on NLP Tasks
X Sun, L Dong, X Li, Z Wan, S Wang, T Zhang, J Li, F Cheng, L Lyu, F Wu, ...
arXiv preprint arXiv:2306.09719, 2023
19 · 2023
Reformulating Domain Adaptation of Large Language Models as Adapt-Retrieve-Revise
Z Wan, Y Zhang, Y Wang, F Cheng, S Kurohashi
arXiv preprint arXiv:2310.03328, 2023
6 · 2023
Seeking Diverse Reasoning Logic: Controlled Equation Expression Generation for Solving Math Word Problems
Y Shen, Q Liu, Z Mao, Z Wan, F Cheng, S Kurohashi
AACL 2022, 2022
6 · 2022
Relation Extraction with Weighted Contrastive Pre-training on Distant Supervision
Z Wan, F Cheng, Q Liu, Z Mao, H Song, S Kurohashi
EACL 2023, 2022
5 · 2022
When do Contrastive Word Alignments Improve Many-to-many Neural Machine Translation?
Z Mao, C Chu, R Dabre, H Song, Z Wan, S Kurohashi
NAACL 2022, 2022
5 · 2022
Rescue Implicit and Long-tail Cases: Nearest Neighbor Relation Extraction
Z Wan, Q Liu, Z Mao, F Cheng, S Kurohashi, J Li
EMNLP 2022, 2022
4 · 2022
Rapidly Developing High-quality Instruction Data and Evaluation Benchmark for Large Language Models with Minimal Human Effort: A Case Study on Japanese
Y Sun, Z Wan, N Ueda, S Yahata, F Cheng, C Chu, S Kurohashi
LREC-Coling 2024, 2024
2 · 2024
Evaluating Saliency Explanations in NLP by Crowdsourcing
X Lu, J Li, Z Wan, X Lin, K Takeuchi, H Kashima
arXiv preprint arXiv:2405.10767, 2024
· 2024
Articles 1–9