Armen Aghajanyan
Facebook AI Research
Verified email at fb.com
Title
Cited by
Year
VideoCLIP: Contrastive pre-training for zero-shot video-text understanding
H Xu, G Ghosh, PY Huang, D Okhonko, A Aghajanyan, F Metze, ...
arXiv preprint arXiv:2109.14084, 2021
Cited by 441 · 2021
InCoder: A generative model for code infilling and synthesis
D Fried, A Aghajanyan, J Lin, S Wang, E Wallace, F Shi, R Zhong, W Yih, ...
arXiv preprint arXiv:2204.05999, 2022
Cited by 431 · 2022
Intrinsic dimensionality explains the effectiveness of language model fine-tuning
A Aghajanyan, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2012.13255, 2020
Cited by 378 · 2020
Muppet: Massive multi-task representations with pre-finetuning
A Aghajanyan, A Gupta, A Shrivastava, X Chen, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2101.11038, 2021
Cited by 247 · 2021
Better fine-tuning by reducing representational collapse
A Aghajanyan, A Shrivastava, A Gupta, N Goyal, L Zettlemoyer, S Gupta
arXiv preprint arXiv:2008.03156, 2020
Cited by 228 · 2020
Memorization without overfitting: Analyzing the training dynamics of large language models
K Tirumala, A Markosyan, L Zettlemoyer, A Aghajanyan
Advances in Neural Information Processing Systems 35, 38274-38290, 2022
Cited by 160 · 2022
Pre-training via paraphrasing
M Lewis, M Ghazvininejad, G Ghosh, A Aghajanyan, S Wang, ...
Advances in Neural Information Processing Systems 33, 18470-18481, 2020
Cited by 151 · 2020
CM3: A causal masked multimodal model of the internet
A Aghajanyan, B Huang, C Ross, V Karpukhin, H Xu, N Goyal, D Okhonko, ...
arXiv preprint arXiv:2201.07520, 2022
Cited by 130 · 2022
Improving passage retrieval with zero-shot question generation
DS Sachan, M Lewis, M Joshi, A Aghajanyan, W Yih, J Pineau, ...
arXiv preprint arXiv:2204.07496, 2022
Cited by 92 · 2022
Scaling autoregressive multi-modal models: Pretraining and instruction tuning
L Yu, B Shi, R Pasunuru, B Muller, O Golovneva, T Wang, A Babu, B Tang, ...
arXiv preprint arXiv:2309.02591, 2023
Cited by 75 · 2023
Retrieval-augmented multimodal language modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
arXiv preprint arXiv:2211.12561, 2022
Cited by 75 · 2022
HTLM: Hyper-text pre-training and prompting of language models
A Aghajanyan, D Okhonko, M Lewis, M Joshi, H Xu, G Ghosh, ...
arXiv preprint arXiv:2107.06955, 2021
Cited by 65 · 2021
MEGABYTE: Predicting million-byte sequences with multiscale transformers
L Yu, D Simig, C Flaherty, A Aghajanyan, L Zettlemoyer, M Lewis
Advances in Neural Information Processing Systems 36, 2024
Cited by 54 · 2024
Scaling laws for generative mixed-modal language models
A Aghajanyan, L Yu, A Conneau, WN Hsu, K Hambardzumyan, S Zhang, ...
International Conference on Machine Learning, 265-279, 2023
Cited by 52 · 2023
Conversational semantic parsing
A Aghajanyan, J Maillard, A Shrivastava, K Diedrick, M Haeger, H Li, ...
arXiv preprint arXiv:2009.13655, 2020
Cited by 50 · 2020
D4: Improving LLM pretraining via document de-duplication and diversification
K Tirumala, D Simig, A Aghajanyan, A Morcos
Advances in Neural Information Processing Systems 36, 2024
Cited by 37 · 2024
Semantic representations using structural ontology for assistant systems
A Aghajanyan, S Gupta, B Moran, TF Levin, CANSH Nakatsu, D Difranco, ...
US Patent 11,688,022, 2023
Cited by 35 · 2023
On-device convolutional neural network models for assistant systems
A Aly, A Babu, A Aghajanyan
US Patent 11,314,941, 2022
Cited by 35 · 2022
Non-autoregressive semantic parsing for compositional task-oriented dialog
A Babu, A Shrivastava, A Aghajanyan, A Aly, A Fan, M Ghazvininejad
arXiv preprint arXiv:2104.04923, 2021
Cited by 24 · 2021
SoftTarget regularization: An effective technique to reduce over-fitting in neural networks
A Aghajanyan
2017 3rd IEEE International Conference on Cybernetics (CYBCONF), 1-5, 2017
Cited by 20 · 2017
Articles 1–20