Muter: Machine unlearning on adversarially trained models

J Liu, M Xue, J Lou, X Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract Machine unlearning is an emerging task of removing the influence of selected
training datapoints from a trained model upon data deletion requests, which echoes the …

Cubic regularized Newton method for the saddle point models: A global and local convergence analysis

K Huang, J Zhang, S Zhang - Journal of Scientific Computing, 2022 - Springer
In this paper, we propose a cubic regularized Newton method for solving convex-
concave minimax saddle point problems. At each iteration, a cubic regularized saddle point …

Train simultaneously, generalize better: Stability of gradient-based minimax learners

F Farnia, A Ozdaglar - International Conference on Machine …, 2021 - proceedings.mlr.press
The success of minimax learning problems of generative adversarial networks (GANs) has
been observed to depend on the minimax optimization algorithm used for their training. This …

Certified minimax unlearning with generalization rates and deletion capacity

J Liu, J Lou, Z Qin, K Ren - Advances in Neural Information …, 2024 - proceedings.neurips.cc
We study the problem of $(\epsilon,\delta)$-certified machine unlearning for minimax
models. Most of the existing works focus on unlearning from standard statistical learning …

Lifted primal-dual method for bilinearly coupled smooth minimax optimization

KK Thekumparampil, N He… - … Conference on Artificial …, 2022 - proceedings.mlr.press
We study the bilinearly coupled minimax problem: $\min_{x}\max_{y} f(x) + y^\top A x - h(y)$,
where $f$ and $h$ are both strongly convex smooth functions and admit first-order …
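As an illustrative sketch of this problem class (not the paper's lifted primal-dual method), simultaneous gradient descent-ascent can be run on the bilinearly coupled objective with toy strongly convex choices $f(x)=\|x\|^2/2$ and $h(y)=\|y\|^2/2$, which place the saddle point at the origin:

```python
import numpy as np

# Simultaneous gradient descent-ascent on
#   min_x max_y  f(x) + y^T A x - h(y)
# with the illustrative choices f(x) = ||x||^2/2, h(y) = ||y||^2/2
# (both 1-strongly convex), so the unique saddle point is (0, 0).
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))  # bilinear coupling matrix
x = np.ones(3)
y = np.ones(3)
eta = 0.1  # step size for both players

for _ in range(300):
    gx = x + A.T @ y   # gradient in x of f(x) + y^T A x
    gy = A @ x - y     # gradient in y of y^T A x - h(y)
    x, y = x - eta * gx, y + eta * gy

print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward 0
```

Because both $f$ and $h$ are strongly convex here, even this plain simultaneous scheme contracts to the saddle point for a small step size; the lifted primal-dual method in the paper targets sharper rates for this structure.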

FedHybrid: A hybrid federated optimization method for heterogeneous clients

X Niu, E Wei - IEEE Transactions on Signal Processing, 2023 - ieeexplore.ieee.org
We consider a distributed consensus optimization problem over a server-client (federated)
network, where all clients are connected to a central server. Current distributed algorithms …

Global convergence to local minmax equilibrium in classes of nonconvex zero-sum games

T Fiez, L Ratliff, E Mazumdar… - Advances in Neural …, 2021 - proceedings.neurips.cc
We study gradient descent-ascent learning dynamics with timescale separation
($\tau$-GDA) in unconstrained continuous action zero-sum games where the minimizing player …
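The role of the timescale parameter can be seen on a toy quadratic zero-sum game (the coefficients below are illustrative choices, not from the paper): the maximizer takes steps $\tau$ times larger than the minimizer, and convergence to the local minmax point flips on as $\tau$ crosses a critical threshold.

```python
import numpy as np

# tau-GDA on the quadratic zero-sum game
#   L(x, y) = -x^2/2 + 2*x*y - y^2
# (nonconvex in x, strongly concave in y; the origin is a local
# minmax equilibrium). Minimizer step: eta. Maximizer step: tau*eta.
def tau_gda(tau, eta=0.1, steps=400, z0=(1.0, 1.0)):
    x, y = z0
    for _ in range(steps):
        gx = -x + 2.0 * y        # dL/dx
        gy = 2.0 * x - 2.0 * y   # dL/dy
        x, y = x - eta * gx, y + tau * eta * gy
    return np.hypot(x, y)        # distance from the equilibrium

# Below the critical timescale separation the iterates spiral out;
# above it they converge to the local minmax point at the origin.
print(tau_gda(0.25))  # diverges
print(tau_gda(2.0))   # converges toward 0
```

For this game the linearized dynamics are stable exactly when $\tau$ exceeds $1/2$, a small instance of the finite-timescale-separation phenomenon the entry studies.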

Local convergence analysis of gradient descent ascent with finite timescale separation

T Fiez, LJ Ratliff - Proceedings of the International Conference on …, 2021 - par.nsf.gov
We study the role that a finite timescale separation parameter τ has on gradient descent-
ascent in non-convex, non-concave zero-sum games where the learning rate of player 1 is …

Higher-order methods for convex-concave min-max optimization and monotone variational inequalities

B Bullins, KA Lai - SIAM Journal on Optimization, 2022 - SIAM
We provide improved convergence rates for constrained convex-concave min-max problems
and monotone variational inequalities with higher-order smoothness. In min-max settings …

Closed-form machine unlearning for matrix factorization

S Zhang, J Lou, L Xiong, X Zhang, J Liu - Proceedings of the 32nd ACM …, 2023 - dl.acm.org
Matrix factorization (MF) is a fundamental model in data mining and machine learning, which
finds wide application in diverse areas, including recommendation systems with …