Long Phan
Center for AI Safety
Verified email at safe.ai - Homepage
Title · Cited by · Year
Bloom: A 176b-parameter open-access multilingual language model
T Le Scao, A Fan, C Akiki, E Pavlick, S Ilić, D Hesslow, R Castagné, ...
Cited by 1306 · 2023
SciFive: a text-to-text transformer model for biomedical literature
LN Phan, JT Anibal, H Tran, S Chanana, E Bahadroglu, A Peltekian, ...
arXiv preprint arXiv:2106.03598, 2021
Cited by 126 · 2021
CoTexT: Multi-task Learning with Code-Text Transformer
L Phan, H Tran, D Le, H Nguyen, J Anibal, A Peltekian, Y Ye
ACL NLP4Prog, 2021
Cited by 115 · 2021
Representation engineering: A top-down approach to AI transparency
A Zou, L Phan, S Chen, J Campbell, P Guo, R Ren, A Pan, X Yin, ...
arXiv preprint arXiv:2310.01405, 2023
Cited by 114 · 2023
ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation
L Phan, H Tran, H Nguyen, TH Trinh
NAACL SRW 2022, 2022
Cited by 48 · 2022
HarmBench: A standardized evaluation framework for automated red teaming and robust refusal
M Mazeika, L Phan, X Yin, A Zou, Z Wang, N Mu, E Sakhaee, N Li, ...
arXiv preprint arXiv:2402.04249, 2024
Cited by 36 · 2024
The WMDP benchmark: Measuring and reducing malicious use with unlearning
N Li, A Pan, A Gopal, S Yue, D Berrios, A Gatti, JD Li, AK Dombrowski, ...
arXiv preprint arXiv:2403.03218, 2024
Cited by 19 · 2024
Hierarchical transformer encoders for Vietnamese spelling correction
H Tran, CV Dinh, L Phan, ST Nguyen
Advances and Trends in Artificial Intelligence. Artificial Intelligence …, 2021
Cited by 9 · 2021
SPBERT: an efficient pre-training BERT on SPARQL queries for question answering over knowledge graphs
H Tran, L Phan, J Anibal, BT Nguyen, TS Nguyen
Neural Information Processing: 28th International Conference, ICONIP 2021 …, 2021
Cited by 7 · 2021
MTet: Multi-domain translation for English and Vietnamese
C Ngo, TH Trinh, L Phan, H Tran, T Dang, H Nguyen, M Nguyen, ...
arXiv preprint arXiv:2210.05610, 2022
Cited by 6 · 2022
Enriching biomedical knowledge for low-resource language through large-scale translation
L Phan, T Dang, H Tran, TH Trinh, V Phan, LD Chau, MT Luong
arXiv preprint arXiv:2210.05598, 2022
Cited by 5 · 2022
VieSum: how robust are transformer-based models on Vietnamese summarization?
H Nguyen, L Phan, J Anibal, A Peltekian, H Tran
arXiv preprint arXiv:2110.04257, 2021
Cited by 5 · 2021
HAL-X: Scalable hierarchical clustering for rapid and tunable single-cell analysis
J Anibal, AG Day, E Bahadiroglu, L O’Neil, L Phan, A Peltekian, A Erez, ...
PLoS Computational Biology 18 (10), e1010349, 2022
Cited by 4* · 2022
Improving Alignment and Robustness with Short Circuiting
A Zou, L Phan, J Wang, D Duenas, M Lin, M Andriushchenko, R Wang, ...
arXiv preprint arXiv:2406.04313, 2024
Cited by 2 · 2024
Articles 1–14