Weijia Shi
Verified email at uw.edu - Homepage
Title · Cited by · Year
REPLUG: Retrieval-augmented black-box language models
W Shi, S Min, M Yasunaga, M Seo, R James, M Lewis, W Yih
arXiv preprint arXiv:2301.12652, 2023
Cited by 374* · 2023
One embedder, any task: Instruction-finetuned text embeddings
H Su, W Shi, J Kasai, Y Wang, Y Hu, M Ostendorf, W Yih, NA Smith, ...
arXiv preprint arXiv:2212.09741, 2022
Cited by 188 · 2022
Fine-grained human feedback gives better rewards for language model training
Z Wu, Y Hu, W Shi, N Dziri, A Suhr, P Ammanabrolu, NA Smith, ...
Advances in Neural Information Processing Systems 36, 2024
Cited by 178* · 2024
Selective annotation makes language models better few-shot learners
H Su, J Kasai, CH Wu, W Shi, T Wang, J Xin, R Zhang, M Ostendorf, ...
arXiv preprint arXiv:2209.01975, 2022
Cited by 171* · 2022
Embedding uncertain knowledge graphs
X Chen, M Chen, W Shi, Y Sun, C Zaniolo
Proceedings of the AAAI conference on artificial intelligence 33 (01), 3363-3370, 2019
Cited by 146 · 2019
Examining gender bias in languages with grammatical gender
P Zhou, W Shi, J Zhao, KH Huang, M Chen, R Cotterell, KW Chang
arXiv preprint arXiv:1909.02224, 2019
Cited by 136* · 2019
Detecting pretraining data from large language models
W Shi, A Ajith, M Xia, Y Huang, D Liu, T Blevins, D Chen, L Zettlemoyer
arXiv preprint arXiv:2310.16789, 2023
Cited by 124 · 2023
PromptCap: Prompt-guided task-aware image captioning
Y Hu, H Hua, Z Yang, W Shi, NA Smith, J Luo
arXiv preprint arXiv:2211.09699, 2022
Cited by 99* · 2022
On tractable representations of binary neural networks
W Shi, A Shih, A Darwiche, A Choi
arXiv preprint arXiv:2004.02082, 2020
Cited by 94* · 2020
Retrieval-augmented multimodal language modeling
M Yasunaga, A Aghajanyan, W Shi, R James, J Leskovec, P Liang, ...
arXiv preprint arXiv:2211.12561, 2022
Cited by 89 · 2022
Trusting your evidence: Hallucinate less with context-aware decoding
W Shi, X Han, M Lewis, Y Tsvetkov, L Zettlemoyer, SW Yih
arXiv preprint arXiv:2305.14739, 2023
Cited by 79 · 2023
Retrofitting contextualized word embeddings with paraphrases
W Shi, M Chen, P Zhou, KW Chang
arXiv preprint arXiv:1909.09700, 2019
Cited by 73* · 2019
RECOMP: Improving retrieval-augmented LMs with context compression and selective augmentation
F Xu, W Shi, E Choi
The Twelfth International Conference on Learning Representations, 2024
Cited by 67* · 2024
RA-DIT: Retrieval-augmented dual instruction tuning
XV Lin, X Chen, M Chen, W Shi, M Lomeli, R James, P Rodriguez, J Kahn, ...
arXiv preprint arXiv:2310.01352, 2023
Cited by 64 · 2023
kNN-Prompt: Nearest Neighbor Zero-Shot Inference
W Shi, J Michael, S Gururangan, L Zettlemoyer
arXiv preprint arXiv:2205.13792, 2022
Cited by 57* · 2022
Nonparametric masked language modeling
S Min, W Shi, M Lewis, X Chen, W Yih, H Hajishirzi, L Zettlemoyer
arXiv preprint arXiv:2212.01349, 2022
Cited by 56 · 2022
SILO language models: Isolating legal risk in a nonparametric datastore
S Min, S Gururangan, E Wallace, W Shi, H Hajishirzi, NA Smith, ...
arXiv preprint arXiv:2308.04430, 2023
Cited by 42 · 2023
Scaling expert language models with unsupervised domain discovery
S Gururangan, M Li, M Lewis, W Shi, T Althoff, NA Smith, L Zettlemoyer
arXiv preprint arXiv:2303.14177, 2023
Cited by 39* · 2023
Do membership inference attacks work on large language models?
M Duan, A Suri, N Mireshghallah, S Min, W Shi, L Zettlemoyer, Y Tsvetkov, ...
arXiv preprint arXiv:2402.07841, 2024
Cited by 33* · 2024
In-context pretraining: Language modeling beyond document boundaries
W Shi, S Min, M Lomeli, C Zhou, M Li, V Lin, NA Smith, L Zettlemoyer, ...
arXiv preprint arXiv:2310.10638, 2023
Cited by 33* · 2023
Articles 1–20