Hongkun Yu
Verified email at google.com
Title
Cited by
Year
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
2195 · 2024
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
1066 · 2023
Mobilebert: a compact task-agnostic bert for resource-limited devices
Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou
arXiv preprint arXiv:2004.02984, 2020
732 · 2020
Large language models can self-improve
J Huang, SS Gu, L Hou, Y Wu, X Wang, H Yu, J Han
arXiv preprint arXiv:2210.11610, 2022
320 · 2022
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
M Reid, N Savinov, D Teplyashin, D Lepikhin, T Lillicrap, J Alayrac, ...
arXiv preprint arXiv:2403.05530, 2024
198 · 2024
Latent factor transition for dynamic collaborative filtering
C Zhang, K Wang, H Yu, J Sun, EP Lim
Proceedings of the 2014 SIAM international conference on data mining, 452-460, 2014
95 · 2014
TensorFlow model garden
H Yu, C Chen, X Du, Y Li, A Rashwan, L Hou, P Jin, F Yang, F Liu, J Kim, ...
Model Garden for TensorFlow, 2020
92 · 2020
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, E Li, X Wang, E H Chi, J Dean, J Devlin, A Roberts, D Zhou, QV Le, J Wei
arXiv preprint, 2022
79 · 2022
Generating representative headlines for news stories
X Gu, Y Mao, J Han, J Liu, Y Wu, C Yu, D Finnie, H Yu, J Zhai, N Zukoski
Proceedings of The Web Conference 2020, 1773-1784, 2020
66 · 2020
On the transformer growth for progressive bert training
X Gu, L Liu, H Yu, J Li, C Chen, J Han
arXiv preprint arXiv:2010.12562, 2020
49 · 2020
Mining multi-aspect reflection of news events in twitter: Discovery, linking and presentation
J Wang, W Tong, H Yu, M Li, X Ma, H Cai, T Hanratty, J Han
2015 IEEE International Conference on Data Mining, 429-438, 2015
40 · 2015
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv preprint arXiv:2305.14705, 2023
37 · 2023
Are features equally representative? A feature-centric recommendation
C Zhang, K Wang, E Lim, Q Xu, J Sun, H Yu
Proceedings of the AAAI Conference on Artificial Intelligence 29 (1), 2015
24 · 2015
Flan-moe: Scaling instruction-finetuned language models with sparse mixture of experts
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
arXiv preprint arXiv:2305.14705, 2023
22 · 2023
Data-driven contextual valence shifter quantification for multi-theme sentiment analysis
H Yu, J Shang, M Hsu, M Castellanos, J Han
Proceedings of the 25th ACM International Conference on Information and …, 2016
21 · 2016
Mobilebert: Task-agnostic compression of bert by progressive knowledge transfer
Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou
17 · 2019
Enct5: Fine-tuning t5 encoder for non-autoregressive tasks
F Liu, S Shakeri, H Yu, J Li
arXiv preprint arXiv:2110.08426, 2021
16 · 2021
TensorFlow model garden
H Yu, C Chen, X Du, Y Li, A Rashwan, L Hou, P Jin, F Yang, F Liu, J Kim, ...
URL https://github.com/tensorflow/models, 2020
15 · 2020
EKNOT: Event knowledge from news and opinions in Twitter
M Li, J Wang, W Tong, H Yu, X Ma, Y Chen, H Cai, J Han
Proceedings of the AAAI Conference on Artificial Intelligence 30 (1), 2016
12 · 2016
Mobilebert: Task-agnostic compression of bert for resource limited devices
Z Sun, H Yu, X Song, R Liu, Y Yang, D Zhou
ICLR Openreview 13, 50-78, 2020
10 · 2020
Articles 1–20