Ziqing Yang
iFLYTEK Research
Verified email at iflytek.com
Title · Cited by · Year
Pre-training with whole word masking for Chinese BERT
Y Cui, W Che, T Liu, B Qin, Z Yang
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 3504-3514, 2021
Cited by 1314 · 2021
Efficient and effective text encoding for Chinese LLaMA and Alpaca
Y Cui, Z Yang, X Yao
arXiv preprint arXiv:2304.08177, 2023
Cited by 156 · 2023
PERT: pre-training BERT with permuted language model
Y Cui, Z Yang, T Liu
arXiv preprint arXiv:2203.06906, 2022
Cited by 43 · 2022
TextBrewer: An open-source knowledge distillation toolkit for natural language processing
Z Yang, Y Cui, Z Chen, W Che, T Liu, S Wang, G Hu
arXiv preprint arXiv:2002.12620, 2020
Cited by 40 · 2020
CINO: A Chinese minority pre-trained language model
Z Yang, Z Xu, Y Cui, B Wang, M Lin, D Wu, Z Chen
arXiv preprint arXiv:2202.13558, 2022
Cited by 30 · 2022
Benchmarking robustness of machine reading comprehension models
C Si, Z Yang, Y Cui, W Ma, T Liu, S Wang
arXiv preprint arXiv:2004.14004, 2020
Cited by 29 · 2020
Pre-training with whole word masking for Chinese BERT. arXiv 2019
Y Cui, W Che, T Liu, B Qin, Z Yang, S Wang, G Hu
arXiv preprint arXiv:1906.08101, 2019
Cited by 22
Improving machine reading comprehension via adversarial training
Z Yang, Y Cui, W Che, T Liu, S Wang, G Hu
arXiv preprint arXiv:1911.03614, 2019
Cited by 21 · 2019
On the evaporation of solar dark matter: spin-independent effective operators
ZL Liang, YL Wu, ZQ Yang, YF Zhou
Journal of Cosmology and Astroparticle Physics 2016 (09), 018, 2016
Cited by 21 · 2016
A sentence cloze dataset for Chinese machine reading comprehension
Y Cui, T Liu, Z Yang, Z Chen, W Ma, W Che, S Wang, G Hu
arXiv preprint arXiv:2004.03116, 2020
Cited by 16 · 2020
Critical behaviors and universality classes of percolation phase transitions on two-dimensional square lattice
Y Zhu, ZQ Yang, X Zhang, XS Chen
Communications in Theoretical Physics 64 (2), 231, 2015
Cited by 11 · 2015
The leptophilic dark matter in the Sun: the minimum testable mass
ZL Liang, YL Tang, ZQ Yang
Journal of Cosmology and Astroparticle Physics 2018 (10), 035, 2018
Cited by 9 · 2018
TextPruner: A model pruning toolkit for pre-trained language models
Z Yang, Y Cui, Z Chen
arXiv preprint arXiv:2203.15996, 2022
Cited by 8 · 2022
Gradient-based intra-attention pruning on pre-trained language models
Z Yang, Y Cui, X Yao, S Wang
arXiv preprint arXiv:2212.07634, 2022
Cited by 7 · 2022
HFL at SemEval-2022 Task 8: A linguistics-inspired regression model with data augmentation for multilingual news similarity
Z Xu, Z Yang, Y Cui, Z Chen
arXiv preprint arXiv:2204.04844, 2022
Cited by 6 · 2022
Cross-lingual text classification with multilingual distillation and zero-shot-aware training
Z Yang, Y Cui, Z Chen, S Wang
arXiv preprint arXiv:2202.13654, 2022
Cited by 5 · 2022
Adversarial training for machine reading comprehension with virtual embeddings
Z Yang, Y Cui, C Si, W Che, T Liu, S Wang, G Hu
arXiv preprint arXiv:2106.04437, 2021
Cited by 5 · 2021
Pre-Training with Whole Word Masking for Chinese BERT. arXiv e-prints, art
Y Cui, W Che, T Liu, B Qin, Z Yang, S Wang, G Hu
arXiv preprint arXiv:1906.08101, 2019
Cited by 5 · 2019
IDOL: indicator-oriented logic pre-training for logical reasoning
Z Xu, Z Yang, Y Cui, S Wang
arXiv preprint arXiv:2306.15273, 2023
Cited by 4 · 2023
Interactive gated decoder for machine reading comprehension
Y Cui, W Che, Z Yang, T Liu, B Qin, S Wang, G Hu
Transactions on Asian and Low-resource Language Information Processing 21 (4 …, 2022
Cited by 4 · 2022
Articles 1–20