Eric Zelikman
Verified email at cs.stanford.edu - Homepage
Title
Cited by
Year
Holistic evaluation of language models
P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, ...
Transactions on Machine Learning Research, 2022
802 · 2022
STaR: Bootstrapping reasoning with reasoning
E Zelikman, Y Wu, J Mu, ND Goodman
NeurIPS 2022, 2022
331 · 2022
Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions
E Zelikman, Q Huang, G Poesia, N Goodman, N Haber
Advances in Neural Information Processing Systems 36, 31466-31523, 2023
55* · 2023
Hypothesis search: Inductive reasoning with language models
R Wang, E Zelikman, G Poesia, Y Pu, N Haber, ND Goodman
ICLR 2024, 2023
34 · 2023
Context Matters for Image Descriptions for Accessibility: Challenges for Referenceless Evaluation Metrics
E Kreiss, C Bennett, S Hooshmand, E Zelikman, MR Morris, C Potts
EMNLP 2022, 2022
25 · 2022
Evaluating the disentanglement of deep generative models through manifold topology
S Zhou, E Zelikman, F Lu, AY Ng, G Carlsson, S Ermon
ICLR 2021, 2020
24 · 2020
Short-Term Solar Irradiance Forecasting Using Calibrated Probabilistic Models
E Zelikman, S Zhou, J Irvin, C Raterink, H Sheng, J Kelly, R Rajagopal, ...
NeurIPS 2020 Workshop on Tackling Climate Change with Machine Learning, 2020
21 · 2020
CRUDE: Calibrating Regression Uncertainty Distributions Empirically
E Zelikman, C Healy, S Zhou, A Avati
ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning, 2020
17* · 2020
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation
E Zelikman, E Lorch, L Mackey, AT Kalai
COLM 2024, 2023
16 · 2023
Just one byte (per gradient): A note on low-bandwidth decentralized language model finetuning using shared randomness
E Zelikman, Q Huang, P Liang, N Haber, ND Goodman
arXiv preprint arXiv:2306.10015, 2023
8 · 2023
Quiet-STaR: Language models can teach themselves to think before speaking
E Zelikman, G Harik, Y Shao, V Jayasiri, N Haber, ND Goodman
COLM, 2024
6 · 2024
Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency
E Zelikman, WA Ma, JE Tran, D Yang, JD Yeatman, N Haber
EMNLP 2023, 2023
5 · 2023
SkyGPT: Probabilistic Ultra-short-term Solar Forecasting Using Synthetic Sky Images from Physics-constrained VideoGPT
Y Nie, E Zelikman, A Scott, Q Paletta, A Brandt
Advances in Applied Energy, 100172, 2023
5* · 2023
Contextual Salience for Fast and Accurate Sentence Vectors
E Zelikman, R Socher
arXiv preprint arXiv:1803.08493, 2018
4* · 2018
Self-supervised alignment with mutual information: Learning to follow principles without preference labels
JP Fränken, E Zelikman, R Rafailov, K Gandhi, T Gerstenberg, ...
arXiv preprint arXiv:2404.14313, 2024
3 · 2024
Certified deductive reasoning with language models
G Poesia, K Gandhi, E Zelikman, ND Goodman
Transactions on Machine Learning Research, 2023
2 · 2023
Specialized program: Generative adversarial networks (GANS)
S Zhou, E Zhou, E Zelikman
2 · 2020
ContextRef: Evaluating Referenceless Metrics For Image Description Generation
E Kreiss, E Zelikman, C Potts, N Haber
ICLR 2024, 2023
1 · 2023
Lexinvariant Language Models
Q Huang, E Zelikman, SL Chen, Y Wu, G Valiant, P Liang
NeurIPS 2023 (Spotlight), 2023
2023
Learning is its Own Reward: Exploring Worlds with Curiosity-driven Spiking Neural Networks
E Zelikman
Undergraduate Honors Thesis, Stanford University (purl.stanford.edu/pb563ty3328), 2020
2020
Articles 1–20