Ben Peters
Instituto de Telecomunicações
Verified email at uw.edu
Title
Cited by
Year
Sparse sequence-to-sequence models
B Peters, V Niculae, AFT Martins
arXiv preprint arXiv:1905.05702, 2019
Cited by 226 · 2019
Massively multilingual neural grapheme-to-phoneme conversion
B Peters, J Dehdari, J van Genabith
arXiv preprint arXiv:1708.01464, 2017
Cited by 54 · 2017
Smoothing and shrinking the sparse Seq2Seq search space
B Peters, AFT Martins
arXiv preprint arXiv:2103.10291, 2021
Cited by 19 · 2021
One-size-fits-all multilingual models
B Peters, AFT Martins
Proceedings of the 17th SIGMORPHON Workshop on Computational Research in …, 2020
Cited by 19 · 2020
Interpretable structure induction via sparse attention
B Peters, V Niculae, AFT Martins
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and …, 2018
Cited by 15 · 2018
Tower: An open multilingual large language model for translation-related tasks
DM Alves, J Pombal, NM Guerreiro, PH Martins, J Alves, A Farajian, ...
arXiv preprint arXiv:2402.17733, 2024
Cited by 12 · 2024
IT–IST at the SIGMORPHON 2019 shared task: Sparse two-headed models for inflection
B Peters, AFT Martins
Proceedings of the 16th Workshop on Computational Research in Phonetics …, 2019
Cited by 11 · 2019
Beyond characters: Subword-level morpheme segmentation
B Peters, AFT Martins
Proceedings of the 19th SIGMORPHON Workshop on Computational Research in …, 2022
Cited by 10 · 2022
DeepSPIN: Deep structured prediction for natural language processing
AFT Martins, B Peters, C Zerva, C Lyu, G Correia, M Treviso, P Martins, ...
Proceedings of the 23rd Annual Conference of the European Association for …, 2022
Cited by 2 · 2022
TOWER: An Open Multilingual Large Language Model for Translation-Related Tasks
P Colombo, D Alves, J Pombal, N Guerreiro, P Martins, J Alves, A Farajian, ...
2024
Did Translation Models Get More Robust Without Anyone Even Noticing?
B Peters, AFT Martins
arXiv preprint arXiv:2403.03923, 2024
2024
Articles 1–11