Winnie Xu
Contextual AI, Stanford University
Verified email at cs.toronto.edu - Homepage
Title · Cited by · Year
Multi-Game Decision Transformers
KH Lee, O Nachum, M Yang, L Lee, D Freeman, W Xu, S Guadarrama, ...
Neural Information Processing Systems (NeurIPS) 2022, 2022
188 · 2022
Prioritized training on points that are learnable, worth learning, and not yet learned
S Mindermann, M Razzak, W Xu, A Kirsch, M Sharma, A Morisot, ...
International Conference on Machine Learning - Subset Selection Workshop, 2021
97 · 2021
KTO: Model alignment as prospect theoretic optimization
K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela
arXiv preprint arXiv:2402.01306, 2024
90 · 2024
Language model cascades
D Dohan, W Xu, A Lewkowycz, J Austin, D Bieber, RG Lopes, Y Wu, ...
ICML Beyond Bayes: Paths Towards Universal Reasoning Systems, 2022
69 · 2022
Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations
W Xu, RTQ Chen, X Li, D Duvenaud
International Conference on Artificial Intelligence and Statistics, 2021
49 · 2021
Noisy Feature Mixup
SH Lim, NB Erichson, F Utrera, W Xu, MW Mahoney
International Conference on Learning Representations, 2021
33 · 2021
Deep latent state space models for time-series generation
L Zhou, M Poli, W Xu, S Massaroli, S Ermon
International Conference on Machine Learning, 42625-42643, 2023
21 · 2023
NoisyMix: Boosting robustness by combining data augmentations, stability training, and noise injections
NB Erichson, SH Lim, F Utrera, W Xu, Z Cao, MW Mahoney
arXiv preprint arXiv:2202.01263 1, 2022
19 · 2022
Neural functional transformers
A Zhou, K Yang, Y Jiang, K Burns, W Xu, S Sokota, JZ Kolter, C Finn
Advances in neural information processing systems 36, 2024
16 · 2024
Human-centered loss functions (HALOs)
K Ethayarajh, W Xu, D Jurafsky, D Kiela
Technical report, Contextual AI, 2023
12 · 2023
NoisyMix: Boosting model robustness to common corruptions
B Erichson, SH Lim, W Xu, F Utrera, Z Cao, M Mahoney
International Conference on Artificial Intelligence and Statistics, 4033-4041, 2024
8 · 2024
Language model cascades, 2022
D Dohan, W Xu, A Lewkowycz, J Austin, D Bieber, RG Lopes, Y Wu, ...
URL https://arxiv.org/abs/2207.10342, 2022
7 · 2022
Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations
M Poli, W Xu, S Massaroli, C Meng, K Kim, S Ermon
Neural Information Processing Systems (NeurIPS) 2022, 2022
3 · 2022
Model Alignment as Prospect Theoretic Optimization
K Ethayarajh, W Xu, N Muennighoff, D Jurafsky, D Kiela
Forty-first International Conference on Machine Learning, 2024
1 · 2024
Prioritized training on points that are learnable, worth learning, and not yet learned (workshop version)
S Mindermann, M Razzak, W Xu, A Kirsch, M Sharma, A Morisot, ...
arXiv preprint arXiv:2107.02565, 2021
2021
Continuous-Depth Bayesian Neural Networks
W Xu, RTQ Chen, X Li, D Duvenaud
International Conference on Machine Learning - Uncertainty in Deep Learning …, 2020
2020
Revisiting Associative Compression: I Can’t Believe It’s Not Better
W Xu, MJ Muckley, Y Dubois, K Ullrich
Articles 1–17