Submodular maximization under various constraints is a fundamental problem, studied continuously in both computer science and operations research since the late 1970s. A …
A Alacaoglu, SJ Wright - International Conference on …, 2024 - proceedings.mlr.press
We analyze the sample complexity of single-loop quadratic penalty and augmented Lagrangian algorithms for solving nonconvex optimization problems with functional equality …
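The quadratic penalty scheme referenced in this abstract has a simple classical form: replace the equality-constrained problem with an unconstrained one whose penalty weight grows. The sketch below is a generic illustration on a toy problem of my own choosing (the function names, penalty schedule, and step size are assumptions, not details from the paper):

```python
import numpy as np

def quadratic_penalty(f_grad, h, h_grad, x0, rhos=(1.0, 10.0, 100.0, 1000.0), inner=200):
    # Quadratic penalty method: for an increasing sequence of penalty
    # weights rho, approximately minimize f(x) + (rho/2) * h(x)^2
    # by gradient descent, warm-starting from the previous solution.
    x = x0
    for rho in rhos:
        step = 1.0 / (2.0 + rho)  # step size safe for this toy problem
        for _ in range(inner):
            g = f_grad(x) + rho * h(x) * h_grad(x)
            x = x - step * g
    return x

# Toy problem: minimize f(x) = x^2 subject to h(x) = x - 1 = 0.
# The penalized minimizer is rho / (2 + rho), which approaches x* = 1.
x = quadratic_penalty(
    f_grad=lambda x: 2.0 * x,
    h=lambda x: x - 1.0,
    h_grad=lambda x: 1.0,
    x0=0.0,
)
```

The augmented Lagrangian variant the abstract also mentions adds an explicit multiplier estimate to this penalty term, which avoids driving rho to infinity.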
M Pedramfar, C Quinn… - Advances in Neural …, 2024 - proceedings.neurips.cc
This paper presents a unified approach for maximizing continuous DR-submodular functions that encompasses a range of settings and oracle access types. Our approach includes a …
The Conditional Gradient (or Frank-Wolfe) method is one of the best-known methods for solving constrained optimization problems appearing in various machine …
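The defining feature of the Conditional Gradient (Frank-Wolfe) method is that it replaces projection with a linear minimization oracle (LMO) over the feasible set, so iterates stay feasible as convex combinations of vertices. A minimal sketch over the probability simplex, where the LMO is just a coordinate argmin (the objective and iteration count here are illustrative assumptions):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, iters=2000):
    # Conditional Gradient (Frank-Wolfe) over the probability simplex.
    # The LMO over the simplex returns the vertex e_i with the smallest
    # gradient coordinate, so no projection is ever needed.
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        i = np.argmin(g)                  # LMO: argmin over simplex vertices
        s = np.zeros_like(x)
        s[i] = 1.0
        gamma = 2.0 / (t + 2.0)           # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s # convex combination stays feasible
    return x

# Example: minimize f(x) = ||x - c||^2 over the simplex; since c lies in
# the simplex, the minimizer is c itself.
c = np.array([0.1, 0.6, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - c), np.ones(3) / 3.0)
```

The classical O(1/t) rate for smooth convex objectives follows from the diminishing step rule gamma = 2/(t+2).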
We consider stochastic optimization problems with functional constraints. If the objective and constraint functions are not convex, the classical stochastic approximation algorithms such …
This paper introduces unified projection-free Frank-Wolfe type algorithms for adversarial continuous DR-submodular optimization, spanning scenarios such as full information and …
Projected Gradient Ascent (PGA) is among the most commonly used optimization schemes in machine learning and operations research. Nevertheless, numerous studies and …
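For contrast with the projection-free methods above, PGA takes an unconstrained gradient step and then projects back onto the feasible set. A minimal sketch on a box constraint, where projection is just clipping (the concave objective and step size are my own illustrative choices):

```python
import numpy as np

def projected_gradient_ascent(grad, project, x0, step=0.1, iters=500):
    # Projected Gradient Ascent: ascend along the gradient, then project
    # back onto the feasible set to restore feasibility.
    x = x0.copy()
    for _ in range(iters):
        x = project(x + step * grad(x))
    return x

# Example: maximize f(x) = -||x - c||^2 over the box [0, 1]^n.
# The maximizer is the clipped point clip(c, 0, 1).
c = np.array([1.5, 0.4, -0.2])
x = projected_gradient_ascent(
    grad=lambda x: -2.0 * (x - c),
    project=lambda x: np.clip(x, 0.0, 1.0),
    x0=np.zeros(3),
)
```

For general convex sets the projection itself can be the expensive step, which is the usual motivation for the Frank-Wolfe-type alternatives discussed in the other entries.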
J Ju, X Wang, D Xu - Journal of Global Optimization, 2024 - Springer
In this paper, we study online diminishing-returns submodular (DR-submodular, for short) maximization in the bandit setting. Our focus is on problems where the reward functions can …
R Francis, SP Chepuri - ICASSP 2024-2024 IEEE International …, 2024 - ieeexplore.ieee.org
In this paper, we propose DP-FedFW, a novel Frank-Wolfe based federated learning algorithm with local (ϵ, δ)-differential privacy (DP) guarantees in a constrained learning …