Proving the lottery ticket hypothesis: Pruning is all you need E Malach, G Yehudai, S Shalev-Shwartz, O Shamir International Conference on Machine Learning, 6682-6691, 2020 | 282 | 2020 |
On the power and limitations of random features for understanding neural networks G Yehudai, O Shamir Advances in Neural Information Processing Systems, 2019 | 210 | 2019 |
From Local Structures to Size Generalization in Graph Neural Networks G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron arXiv preprint arXiv:2010.08853, 2020 | 122 | 2020 |
Reconstructing training data from trained neural networks N Haim, G Vardi, G Yehudai, O Shamir, M Irani Advances in Neural Information Processing Systems 35, 22911-22924, 2022 | 113 | 2022 |
Learning a single neuron with gradient methods G Yehudai, O Shamir Conference on Learning Theory, 3756-3786, 2020 | 77 | 2020 |
The effects of mild over-parameterization on the optimization landscape of shallow ReLU neural networks I Safran, G Yehudai, O Shamir Conference on Learning Theory, 3889-3934, 2021 | 40 | 2021 |
Gradient methods provably converge to non-robust networks G Vardi, G Yehudai, O Shamir Advances in Neural Information Processing Systems 35, 20921-20932, 2022 | 26 | 2022 |
Learning a single neuron with bias using gradient descent G Vardi, G Yehudai, O Shamir Advances in Neural Information Processing Systems 34, 28690-28700, 2021 | 21 | 2021 |
The connection between approximation, depth separation and learnability in neural networks E Malach, G Yehudai, S Shalev-Shwartz, O Shamir Conference on Learning Theory, 3265-3295, 2021 | 21 | 2021 |
On the optimal memorization power of ReLU neural networks G Vardi, G Yehudai, O Shamir arXiv preprint arXiv:2110.03187, 2021 | 20 | 2021 |
Width is less important than depth in ReLU neural networks G Vardi, G Yehudai, O Shamir Conference on Learning Theory, 1249-1281, 2022 | 14 | 2022 |
From tempered to benign overfitting in ReLU neural networks G Kornowski, G Yehudai, O Shamir Advances in Neural Information Processing Systems 36, 2024 | 11 | 2024 |
Generating collection rules based on security rules NA Arbel, L Lazar, G Yehudai US Patent 11,330,016, 2022 | 8 | 2022 |
Deconstructing data reconstruction: Multiclass, weight decay and general losses G Buzaglo, N Haim, G Yehudai, G Vardi, Y Oz, Y Nikankin, M Irani Advances in Neural Information Processing Systems 36, 2024 | 6 | 2024 |
On size generalization in graph neural networks G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron | 6 | 2020 |
Adversarial examples exist in two-layer ReLU networks for low dimensional linear subspaces O Melamed, G Yehudai, G Vardi Advances in Neural Information Processing Systems 36, 2024 | 3* | 2024 |
Aggregating alerts of malicious events for computer security G Yehudai, I Mantin, L Fisch, S Hershkovitz, A Shulman, MR Ambar US Patent 11,218,448, 2022 | 1 | 2022 |
On the Benefits of Rank in Attention Layers N Amsel, G Yehudai, J Bruna arXiv preprint arXiv:2407.16153, 2024 | | 2024 |
Reconstructing Training Data From Real World Models Trained with Transfer Learning Y Oz, G Yehudai, G Vardi, I Antebi, M Irani, N Haim arXiv preprint arXiv:2407.15845, 2024 | | 2024 |
When Can Transformers Count to n? G Yehudai, H Kaplan, A Ghandeharioun, M Geva, A Globerson arXiv preprint arXiv:2407.15160, 2024 | | 2024 |