Lipschitz-constrained networks have attracted considerable attention in the deep learning community, with applications ranging from Wasserstein distance estimation to the training of …
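(The Wasserstein distance estimation mentioned in this snippet typically rests on the Kantorovich-Rubinstein duality, which takes a supremum over 1-Lipschitz functions; written out,

    W_1(P, Q) = \sup_{\|f\|_{L} \le 1} \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)],

so a network constrained to be 1-Lipschitz can be trained to approximate the supremum directly, the idea behind the Wasserstein GAN critic.)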
H Min, R Vidal - arXiv preprint arXiv:2405.15942, 2024 - arxiv.org
The implicit bias of gradient-based training algorithms has largely been considered beneficial, as it leads to trained networks that often generalize well. However, Frei et …
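(For context, a canonical example of this benign implicit bias, the result of Soudry et al., 2018, though not necessarily the one this snippet cites: gradient descent on the logistic loss over linearly separable data converges in direction to the \ell_2 maximum-margin separator,

    \lim_{t \to \infty} \frac{w(t)}{\|w(t)\|_2} = \frac{\hat{w}}{\|\hat{w}\|_2}, \qquad \hat{w} = \arg\min_{w} \|w\|_2 \ \text{s.t.} \ y_i\, w^\top x_i \ge 1 \ \forall i.)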
We study the implicit bias of optimization in robust empirical risk minimization (robust ERM) and its connection with robust generalization. In classification settings under adversarial …
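(Robust ERM replaces the average loss of standard ERM with a worst-case loss over a perturbation set; under an \ell_p budget \epsilon the objective is commonly written as

    \min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \max_{\|\delta_i\|_p \le \epsilon} \ell\big(f_\theta(x_i + \delta_i),\, y_i\big),

which recovers plain ERM when \epsilon = 0.)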
Despite their impressive performance in classification, neural networks are known to be vulnerable to adversarial attacks. These attacks are small perturbations of the input data …
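(A minimal sketch of one such perturbation, the fast gradient sign method of Goodfellow et al.; PyTorch is assumed here, and model, loss_fn, x, y, and the budget eps are illustrative placeholders rather than anything from the snippet above.

    import torch

    def fgsm_perturb(model, loss_fn, x, y, eps=0.03):
        """Return x shifted by an eps-bounded (L-infinity) adversarial step."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)  # loss at the clean input
        loss.backward()                  # gradient of the loss w.r.t. the input
        # Step of size eps in the sign of the gradient; the resulting
        # perturbation has L-infinity norm exactly eps.
        return (x_adv + eps * x_adv.grad.sign()).detach()

The single signed-gradient step keeps the perturbation inside the \ell_\infty ball of radius eps, which is the usual formalization of a "small perturbation of the input data".)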
Training dataset biases are by far the most scrutinized factors when explaining algorithmic biases of neural networks. In contrast, hyperparameters related to the neural network …
In this thesis, we study two closely related directions: robustness and generalization in modern deep learning. Deep learning models based on empirical risk minimization are …
A Shachar - arXiv preprint arXiv:1012.5751, 2010 - arxiv.org
The Infinitesimal Calculus mainly explores two measurements: instantaneous rates of change and the accumulation of quantities. This work shows that scientists, engineers …
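(The two measurements correspond to the derivative and the definite integral; in the standard limit formulation,

    f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}, \qquad \int_a^b f(x)\, dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(x_i^*)\, \Delta x,

with \Delta x = (b-a)/n and x_i^* a sample point in the i-th subinterval.)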