Enhancing sharpness-aware optimization through variance suppression

B Li, G Giannakis - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Sharpness-aware minimization (SAM) has well-documented merits in enhancing
generalization of deep neural networks, even without sizable data augmentation. Embracing …
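
The method in this entry builds on the base SAM update: take an ascent step to a nearby "worst-case" point inside a small ball, then descend using the gradient computed there. The sketch below shows only that base update on a toy NumPy objective; the loss, rho, and learning rate are illustrative assumptions, and this is not the variance-suppressed variant proposed in the paper.

```python
# Minimal sketch of the base SAM step: perturb toward the worst case in a
# rho-ball, then descend with the gradient taken at the perturbed weights.
# The toy loss and hyperparameters are illustrative assumptions.
import numpy as np

def loss(w):
    # toy nonconvex surrogate for a network training loss
    return np.sum(np.sin(w) ** 2) + 0.1 * np.sum(w ** 2)

def grad(w):
    return np.sin(2 * w) + 0.2 * w

def sam_step(w, rho=0.05, lr=0.1):
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # normalized ascent perturbation
    g_sharp = grad(w + eps)                       # gradient at the perturbed point
    return w - lr * g_sharp

w = np.random.default_rng(0).standard_normal(10)
for _ in range(200):
    w = sam_step(w)
print("final loss:", loss(w))
```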

Constrained submodular maximization via new bounds for DR-submodular functions

N Buchbinder, M Feldman - Proceedings of the 56th Annual ACM …, 2024 - dl.acm.org
Submodular maximization under various constraints is a fundamental problem studied
continuously, in both computer science and operations research, since the late 1970s. A …

Complexity of single loop algorithms for nonlinear programming with stochastic objective and constraints

A Alacaoglu, SJ Wright - International Conference on …, 2024 - proceedings.mlr.press
We analyze the sample complexity of single-loop quadratic penalty and augmented
Lagrangian algorithms for solving nonconvex optimization problems with functional equality …
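
A single-loop scheme of this kind typically alternates one stochastic gradient step on the augmented Lagrangian with one multiplier update, instead of solving an inner subproblem to tolerance. The sketch below illustrates that pattern on a toy equality-constrained least-squares problem; the instance, noise model, and step sizes are illustrative assumptions, not the penalty schedule analyzed in the paper.

```python
# Hedged sketch of a single-loop stochastic augmented Lagrangian iteration for
# min f(x) s.t. c(x) = 0; problem data and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def f_grad(x):               # stochastic gradient of f(x) = 0.5 * ||x - b||^2
    b = np.array([1.0, 2.0])
    return (x - b) + 0.1 * rng.standard_normal(x.shape)

def c(x):                    # single equality constraint c(x) = x1 + x2 - 1
    return np.array([x[0] + x[1] - 1.0])

def c_jac(x):                # Jacobian of c, shape (1, 2)
    return np.array([[1.0, 1.0]])

x, lam, rho = np.zeros(2), np.zeros(1), 10.0
eta_x, eta_lam = 0.05, 0.05
for _ in range(2000):
    # primal step on the augmented Lagrangian f + lam^T c + (rho/2) * ||c||^2
    g = f_grad(x) + c_jac(x).T @ (lam + rho * c(x))
    x -= eta_x * g
    # dual ascent step on the multiplier
    lam += eta_lam * c(x)
print("x:", x, "constraint violation:", c(x))
```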

A unified approach for maximizing continuous DR-submodular functions

M Pedramfar, C Quinn… - Advances in Neural …, 2024 - proceedings.neurips.cc
This paper presents a unified approach for maximizing continuous DR-submodular functions
that encompasses a range of settings and oracle access types. Our approach includes a …

Stochastic Frank-Wolfe: Unified Analysis and Zoo of Special Cases

R Nazykov, A Shestakov, V Solodkin… - International …, 2024 - proceedings.mlr.press
The Conditional Gradient (or Frank-Wolfe) method is one of the most well-known
methods for solving constrained optimization problems appearing in various machine …
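
A conditional-gradient step replaces projection with a linear minimization oracle (LMO): given a (stochastic) gradient g, it finds v = argmin over the feasible set of <g, v> and moves toward v. The sketch below runs this loop with an l1-ball LMO on a least-squares toy problem; the objective, minibatching, and step-size rule are illustrative assumptions rather than one of the special cases covered by the paper.

```python
# Minimal sketch of a stochastic Frank-Wolfe (conditional gradient) loop over
# an l1-ball; problem data and the step-size rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def stoch_grad(x, batch=4):
    # minibatch gradient of the least-squares objective 0.5 * ||Ax - b||^2
    idx = rng.choice(A.shape[0], size=batch, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / batch

def lmo_l1(g, radius=1.0):
    # linear minimization oracle over {x : ||x||_1 <= radius}
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g)
    v[i] = -radius * np.sign(g[i])
    return v

x = np.zeros(5)
for t in range(500):
    v = lmo_l1(stoch_grad(x))
    gamma = 2.0 / (t + 2)            # classical Frank-Wolfe step size
    x = x + gamma * (v - x)
print("||x||_1 =", np.linalg.norm(x, 1))
```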

Constrained Stochastic Recursive Momentum Successive Convex Approximation

BM Idrees, L Arora, K Rajawat - arXiv preprint arXiv:2404.11790, 2024 - arxiv.org
We consider stochastic optimization problems with functional constraints. If the objective and
constraint functions are not convex, the classical stochastic approximation algorithms such …

Unified Projection-Free Algorithms for Adversarial DR-Submodular Optimization

M Pedramfar, YY Nadew, CJ Quinn… - arXiv preprint arXiv …, 2024 - arxiv.org
This paper introduces unified projection-free Frank-Wolfe type algorithms for adversarial
continuous DR-submodular optimization, spanning scenarios such as full information and …

Boosting Gradient Ascent for Continuous DR-submodular Maximization

Q Zhang, Z Wan, Z Deng, Z Chen, X Sun… - arXiv preprint arXiv …, 2024 - arxiv.org
Projected Gradient Ascent (PGA) is the most commonly used optimization scheme in
machine learning and operations research. Nevertheless, numerous studies and …
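
Plain PGA for continuous DR-submodular maximization simply ascends along the gradient and projects back onto the constraint set. The sketch below applies it to a probabilistic-coverage objective over the box [0,1]^n, a standard example of a DR-submodular function; the instance and step size are illustrative assumptions, and this is the vanilla baseline, not the boosted variant proposed in the paper.

```python
# Minimal sketch of projected gradient ascent on a probabilistic-coverage
# DR-submodular objective over the box [0, 1]^n; instance data and the step
# size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
P = rng.uniform(0.0, 0.5, size=(6, 4))   # P[i, j]: chance item i covers target j per unit of x_i
w = rng.uniform(1.0, 2.0, size=4)        # target weights

def F(x):
    # F(x) = sum_j w_j * (1 - prod_i (1 - P[i, j] * x_i))
    return np.sum(w * (1.0 - np.prod(1.0 - P * x[:, None], axis=0)))

def grad_F(x):
    prod_all = np.prod(1.0 - P * x[:, None], axis=0)       # shape (4,)
    prod_wo_i = prod_all / (1.0 - P * x[:, None])           # leave-one-out products, shape (6, 4)
    return (P * prod_wo_i) @ w

x, eta = np.full(6, 0.5), 0.1
for _ in range(200):
    x = np.clip(x + eta * grad_F(x), 0.0, 1.0)              # projection onto the box [0, 1]^n
print("F(x) =", F(x), " x =", np.round(x, 3))
```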

Online non-monotone diminishing return submodular maximization in the bandit setting

J Ju, X Wang, D Xu - Journal of Global Optimization, 2024 - Springer
In this paper, we study online diminishing return submodular (DR-submodular for short)
maximization in the bandit setting. Our focus is on problems where the reward functions can …

Differentially Private Federated Frank-Wolfe

R Francis, SP Chepuri - ICASSP 2024-2024 IEEE International …, 2024 - ieeexplore.ieee.org
In this paper, we propose DP-FedFW, a novel Frank-Wolfe-based federated learning
algorithm with local (ϵ, δ)-differential privacy (DP) guarantees in a constrained learning …
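
Local (ϵ, δ)-DP guarantees of this kind are commonly obtained with the Gaussian mechanism: each client clips its gradient to bound sensitivity and adds calibrated Gaussian noise before the gradient enters the (Frank-Wolfe) update. The sketch below shows only that generic privatization step; the clip norm and noise multiplier are illustrative assumptions, and it is not the DP-FedFW algorithm itself.

```python
# Hedged sketch of the standard Gaussian-mechanism gradient privatization:
# clip to bound sensitivity, then add calibrated noise. Generic DP machinery,
# not the cited algorithm; parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

def privatize_gradient(g, clip_norm=1.0, noise_multiplier=1.1):
    # clip the per-client gradient so its l2 norm is at most clip_norm
    g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
    # add Gaussian noise scaled to the clipped sensitivity
    return g + noise_multiplier * clip_norm * rng.standard_normal(g.shape)

g = rng.standard_normal(5)
print(privatize_gradient(g))
```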