Gilad Yehudai
Postdoctoral Associate, New York University
Verified email at weizmann.ac.il - Homepage
Title · Cited by · Year
Proving the lottery ticket hypothesis: Pruning is all you need
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
International Conference on Machine Learning, 6682-6691, 2020
282 · 2020
On the power and limitations of random features for understanding neural networks
G Yehudai, O Shamir
Advances in Neural Information Processing Systems, 2019
210 · 2019
From Local Structures to Size Generalization in Graph Neural Networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
arXiv preprint arXiv:2010.08853, 2020
122 · 2020
Reconstructing training data from trained neural networks
N Haim, G Vardi, G Yehudai, O Shamir, M Irani
Advances in Neural Information Processing Systems 35, 22911-22924, 2022
113 · 2022
Learning a single neuron with gradient methods
G Yehudai, O Shamir
Conference on Learning Theory, 3756-3786, 2020
77 · 2020
The effects of mild over-parameterization on the optimization landscape of shallow ReLU neural networks
IM Safran, G Yehudai, O Shamir
Conference on Learning Theory, 3889-3934, 2021
40 · 2021
Gradient methods provably converge to non-robust networks
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 35, 20921-20932, 2022
26 · 2022
Learning a single neuron with bias using gradient descent
G Vardi, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 34, 28690-28700, 2021
21 · 2021
The connection between approximation, depth separation and learnability in neural networks
E Malach, G Yehudai, S Shalev-Shwartz, O Shamir
Conference on Learning Theory, 3265-3295, 2021
21 · 2021
On the optimal memorization power of ReLU neural networks
G Vardi, G Yehudai, O Shamir
arXiv preprint arXiv:2110.03187, 2021
20 · 2021
Width is less important than depth in ReLU neural networks
G Vardi, G Yehudai, O Shamir
Conference on Learning Theory, 1249-1281, 2022
14 · 2022
From tempered to benign overfitting in ReLU neural networks
G Kornowski, G Yehudai, O Shamir
Advances in Neural Information Processing Systems 36, 2024
11 · 2024
Generating collection rules based on security rules
NA Arbel, L Lazar, G Yehudai
US Patent 11,330,016, 2022
8 · 2022
Deconstructing data reconstruction: Multiclass, weight decay and general losses
G Buzaglo, N Haim, G Yehudai, G Vardi, Y Oz, Y Nikankin, M Irani
Advances in Neural Information Processing Systems 36, 2024
6 · 2024
On size generalization in graph neural networks
G Yehudai, E Fetaya, E Meirom, G Chechik, H Maron
6 · 2020
Adversarial examples exist in two-layer ReLU networks for low dimensional linear subspaces
O Melamed, G Yehudai, G Vardi
Advances in Neural Information Processing Systems 36, 2024
3* · 2024
Aggregating alerts of malicious events for computer security
G Yehudai, I Mantin, L Fisch, S Hershkovitz, A Shulman, MR Ambar
US Patent 11,218,448, 2022
1 · 2022
On the Benefits of Rank in Attention Layers
N Amsel, G Yehudai, J Bruna
arXiv preprint arXiv:2407.16153, 2024
2024
Reconstructing Training Data From Real World Models Trained with Transfer Learning
Y Oz, G Yehudai, G Vardi, I Antebi, M Irani, N Haim
arXiv preprint arXiv:2407.15845, 2024
2024
When Can Transformers Count to n?
G Yehudai, H Kaplan, A Ghandeharioun, M Geva, A Globerson
arXiv preprint arXiv:2407.15160, 2024
2024
Articles 1–20