Holistic evaluation of language models. P Liang, R Bommasani, T Lee, D Tsipras, D Soylu, M Yasunaga, Y Zhang, et al. Transactions on Machine Learning Research, 2022. Cited by 802.
STaR: Bootstrapping reasoning with reasoning. E Zelikman, Y Wu, J Mu, ND Goodman. NeurIPS 2022. Cited by 331.
Parsel🐍: Algorithmic Reasoning with Language Models by Composing Decompositions. E Zelikman, Q Huang, G Poesia, N Goodman, N Haber. Advances in Neural Information Processing Systems 36, 31466-31523, 2023. Cited by 55*.
Hypothesis search: Inductive reasoning with language models. R Wang, E Zelikman, G Poesia, Y Pu, N Haber, ND Goodman. ICLR 2024, 2023. Cited by 34.
Context Matters for Image Descriptions for Accessibility: Challenges for Referenceless Evaluation Metrics. E Kreiss, C Bennett, S Hooshmand, E Zelikman, MR Morris, C Potts. EMNLP 2022. Cited by 25.
Evaluating the disentanglement of deep generative models through manifold topology. S Zhou, E Zelikman, F Lu, AY Ng, G Carlsson, S Ermon. ICLR 2021, 2020. Cited by 24.
Short-Term Solar Irradiance Forecasting Using Calibrated Probabilistic Models. E Zelikman, S Zhou, J Irvin, C Raterink, H Sheng, J Kelly, R Rajagopal, et al. NeurIPS 2020 Workshop on Tackling Climate Change with Machine Learning, 2020. Cited by 21.
CRUDE: Calibrating Regression Uncertainty Distributions Empirically. E Zelikman, C Healy, S Zhou, A Avati. ICML 2020 Workshop on Uncertainty & Robustness in Deep Learning, 2020. Cited by 17*.
Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation. E Zelikman, E Lorch, L Mackey, AT Kalai. COLM 2024, 2023. Cited by 16.
Just one byte (per gradient): A note on low-bandwidth decentralized language model finetuning using shared randomness. E Zelikman, Q Huang, P Liang, N Haber, ND Goodman. arXiv preprint arXiv:2306.10015, 2023. Cited by 8.
Quiet-STaR: Language models can teach themselves to think before speaking. E Zelikman, G Harik, Y Shao, V Jayasiri, N Haber, ND Goodman. COLM 2024. Cited by 6.
Generating and Evaluating Tests for K-12 Students with Language Model Simulations: A Case Study on Sentence Reading Efficiency. E Zelikman, WA Ma, JE Tran, D Yang, JD Yeatman, N Haber. EMNLP 2023. Cited by 5.
SkyGPT: Probabilistic Ultra-short-term Solar Forecasting Using Synthetic Sky Images from Physics-constrained VideoGPT. Y Nie, E Zelikman, A Scott, Q Paletta, A Brandt. Advances in Applied Energy, 100172, 2023. Cited by 5*.
Contextual Salience for Fast and Accurate Sentence Vectors. E Zelikman, R Socher. arXiv preprint arXiv:1803.08493, 2018. Cited by 4*.
Self-supervised alignment with mutual information: Learning to follow principles without preference labels. JP Fränken, E Zelikman, R Rafailov, K Gandhi, T Gerstenberg, et al. arXiv preprint arXiv:2404.14313, 2024. Cited by 3.
Certified deductive reasoning with language models. G Poesia, K Gandhi, E Zelikman, ND Goodman. Transactions on Machine Learning Research, 2023. Cited by 2.
Specialized program: Generative adversarial networks (GANs). S Zhou, E Zhou, E Zelikman. 2020. Cited by 2.
ContextRef: Evaluating Referenceless Metrics For Image Description Generation. E Kreiss, E Zelikman, C Potts, N Haber. ICLR 2024, 2023. Cited by 1.
Lexinvariant Language Models. Q Huang, E Zelikman, SL Chen, Y Wu, G Valiant, P Liang. NeurIPS 2023 (Spotlight).
Learning is its Own Reward: Exploring Worlds with Curiosity-driven Spiking Neural Networks. E Zelikman. Undergraduate Honors Thesis, Stanford University (purl.stanford.edu/pb563ty3328), 2020.