Adam Roberts
Google DeepMind
Verified email at google.com - Homepage
Title
Cited by
Year
Exploring the limits of transfer learning with a unified text-to-text transformer
C Raffel*, N Shazeer*, A Roberts*, K Lee*, S Narang, M Matena, Y Zhou, ...
arXiv preprint arXiv:1910.10683, 2019
Cited by 15982 · 2019
Differential gene and transcript expression analysis of RNA-seq experiments with TopHat and Cufflinks
C Trapnell, A Roberts, L Goff, G Pertea, D Kim, DR Kelley, H Pimentel, ...
Nature Protocols 7 (3), 562-578, 2012
Cited by 12880 · 2012
PaLM: Scaling language modeling with pathways
A Chowdhery, S Narang, J Devlin, M Bosma, G Mishra, A Roberts, ...
Journal of Machine Learning Research 24 (240), 1-113, 2023
Cited by 3823 · 2023
Scaling instruction-finetuned language models
HW Chung, L Hou, S Longpre, B Zoph, Y Tay, W Fedus, Y Li, X Wang, ...
Journal of Machine Learning Research 25 (70), 1-53, 2024
Cited by 1968 · 2024
mT5: A massively multilingual pre-trained text-to-text transformer
L Xue*, N Constant*, A Roberts*, M Kale, R Al-Rfou, A Siddhant, A Barua, ...
arXiv preprint arXiv:2010.11934, 2020
Cited by 1910 · 2020
Improving RNA-Seq expression estimates by correcting for fragment bias
A Roberts, C Trapnell, J Donaghey, JL Rinn, L Pachter
Genome Biology 12, 1-14, 2011
Cited by 1518 · 2011
Extracting training data from large language models
N Carlini, F Tramer, E Wallace, M Jagielski, A Herbert-Voss, K Lee, ...
30th USENIX Security Symposium (USENIX Security 21), 2633-2650, 2021
Cited by 1361 · 2021
BLOOM: A 176B-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1280 · 2023
LaMDA: Language models for dialog applications
R Thoppilan, D De Freitas, J Hall, N Shazeer, A Kulshreshtha, HT Cheng, ...
arXiv preprint arXiv:2201.08239, 2022
Cited by 1217 · 2022
Identification of novel transcripts in annotated genomes using RNA-Seq
A Roberts, H Pimentel, C Trapnell, L Pachter
Bioinformatics 27 (17), 2325-2329, 2011
Cited by 1209 · 2011
Streaming fragment assignment for real-time analysis of sequencing experiments
A Roberts, L Pachter
Nature Methods, 2012
Cited by 1078 · 2012
How much knowledge can you pack into the parameters of a language model?
A Roberts*, C Raffel*, N Shazeer
arXiv preprint arXiv:2002.08910, 2020
Cited by 724 · 2020
Neural audio synthesis of musical notes with wavenet autoencoders
J Engel, C Resnick, A Roberts, S Dieleman, M Norouzi, D Eck, ...
International Conference on Machine Learning, 1068-1077, 2017
Cited by 713 · 2017
A hierarchical latent vector model for learning long-term structure in music
A Roberts, J Engel, C Raffel, C Hawthorne, D Eck
International Conference on Machine Learning, 4364-4373, 2018
Cited by 556 · 2018
GANSynth: Adversarial neural audio synthesis
J Engel, KK Agrawal, S Chen, I Gulrajani, C Donahue, A Roberts
arXiv preprint arXiv:1902.08710, 2019
Cited by 516 · 2019
Enabling factorized piano music modeling and generation with the MAESTRO dataset
C Hawthorne, A Stasyuk, A Roberts, I Simon, CZA Huang, S Dieleman, ...
arXiv preprint arXiv:1810.12247, 2018
Cited by 482 · 2018
Crosslingual generalization through multitask finetuning
N Muennighoff, T Wang, L Sutawika, A Roberts, S Biderman, TL Scao, ...
arXiv preprint arXiv:2211.01786, 2022
Cited by 432 · 2022
DDSP: Differentiable digital signal processing
J Engel, L Hantrakul, C Gu, A Roberts
arXiv preprint arXiv:2001.04643, 2020
Cited by 431 · 2020
Fast statistical alignment
RK Bradley, A Roberts, M Smoot, S Juvekar, J Do, C Dewey, I Holmes, ...
PLoS computational biology 5 (5), e1000392, 2009
Cited by 410 · 2009
The Flan Collection: Designing data and methods for effective instruction tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
International Conference on Machine Learning, 22631-22648, 2023
Cited by 391 · 2023
Articles 1–20