Domain adaptation for time series under feature and label shifts

H He, O Queen, T Koker, C Cuevas… - International …, 2023 - proceedings.mlr.press
Unsupervised domain adaptation (UDA) enables the transfer of models trained on source
domains to unlabeled target domains. However, transferring complex time series models …

Solving a class of non-convex minimax optimization in federated learning

X Wu, J Sun, Z Hu, A Zhang… - Advances in Neural …, 2024 - proceedings.neurips.cc
Minimax problems arise throughout machine learning applications, ranging from
adversarial training and policy evaluation in reinforcement learning to AUROC …
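
As a point of reference for the minimax setting mentioned above, the sketch below shows plain simultaneous gradient descent-ascent on a toy objective min_x max_y f(x, y); the objective, step sizes, and iteration count are illustrative assumptions, and this is not the federated algorithm studied in the paper.

import numpy as np

# Toy objective f(x, y) = x * sin(x) + x * y - 0.5 * y**2
# (illustrative only; nonconvex in x, concave in y).
def grad_x(x, y):
    return np.sin(x) + x * np.cos(x) + y

def grad_y(x, y):
    return x - y

def gradient_descent_ascent(x0, y0, eta_x=0.01, eta_y=0.05, iters=2000):
    """Simultaneous gradient descent-ascent for min_x max_y f(x, y)."""
    x, y = x0, y0
    for _ in range(iters):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x, y = x - eta_x * gx, y + eta_y * gy   # descent in x, ascent in y
    return x, y

x_star, y_star = gradient_descent_ascent(1.0, 0.0)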

Anderson Acceleration with Truncated Gram–Schmidt

Z Tang, T Xu, H He, Y Saad, Y Xi - SIAM Journal on Matrix Analysis and …, 2024 - SIAM
Anderson acceleration (AA) is a popular algorithm designed to enhance the convergence of
fixed-point iterations. In this paper, we introduce a variant of AA based on a truncated Gram …
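
Since several entries in this list build on Anderson acceleration and Anderson mixing, a minimal sketch of the classical method for a fixed-point map x = g(x) is given below; the mixing weights are obtained here from a regularized least-squares solve rather than the truncated Gram-Schmidt procedure introduced in the paper, and all names and parameter choices are illustrative.

import numpy as np

def anderson_acceleration(g, x0, m=5, iters=50, reg=1e-10):
    """Classical Anderson acceleration for the fixed-point map x = g(x).

    Keeps the last m residuals r_i = g(x_i) - x_i, finds mixing weights alpha
    (summing to one) that minimize ||sum_i alpha_i * r_i||, and takes the
    corresponding combination of the g(x_i) as the next iterate.
    """
    x = np.asarray(x0, dtype=float)
    G, R = [], []                              # histories of g(x_i) and residuals
    for _ in range(iters):
        gx = g(x)
        G.append(gx)
        R.append(gx - x)
        if len(R) > m:                         # sliding window of the last m pairs
            G.pop(0); R.pop(0)
        Rmat = np.column_stack(R)
        k = Rmat.shape[1]
        # Solve min_alpha ||Rmat @ alpha||^2 subject to sum(alpha) = 1.
        RtR = Rmat.T @ Rmat + reg * np.eye(k)  # small shift for stability
        w = np.linalg.solve(RtR, np.ones(k))
        alpha = w / w.sum()
        x = np.column_stack(G) @ alpha         # mixed (accelerated) iterate
    return x

# Example: accelerate the contraction g(x) = cos(x) toward its fixed point.
x_fp = anderson_acceleration(np.cos, x0=np.array([1.0]))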

Convergence Analysis for Restarted Anderson Mixing and Beyond

F Wei, C Bao, Y Liu, G Yang - arXiv preprint arXiv:2307.02062, 2023 - arxiv.org
Anderson mixing (AM) is a classical method that can accelerate fixed-point iterations by
exploring historical information. Despite the successful application of AM in scientific …

nlTGCR: A class of nonlinear acceleration procedures based on conjugate residuals

H He, Z Tang, S Zhao, Y Saad, Y Xi - SIAM Journal on Matrix Analysis and …, 2024 - SIAM
This paper develops a new class of nonlinear acceleration algorithms based on extending
conjugate residual-type procedures from linear to nonlinear equations. The main algorithm …

Diffusion Optimistic Learning for Min-Max Optimization

H Cai, SA Alghunaim, AH Sayed - ICASSP 2024-2024 IEEE …, 2024 - ieeexplore.ieee.org
This work introduces and studies the convergence of a stochastic diffusion-optimistic
learning (DOL) strategy for solving distributed nonconvex (NC) and Polyak–Łojasiewicz (PL) …

A variant of Anderson mixing with minimal memory size

F Wei, C Bao, Y Liu, G Yang - Advances in Neural …, 2022 - proceedings.neurips.cc
Anderson mixing (AM) is a useful method that can accelerate fixed-point iterations by
exploiting information from historical iterations. Despite its numerical success in various …

Reducing Operator Complexity of Galerkin Coarse-grid Operators with Machine Learning

R Huang, K Chang, H He, R Li, Y Xi - SIAM Journal on Scientific Computing, 2024 - SIAM
We propose a data-driven and machine-learning-based approach to compute non-Galerkin
coarse-grid operators in multigrid (MG) methods, addressing the well-known issue of …
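
For the terms used in this entry: the standard Galerkin coarse-grid operator is A_c = P^T A P for a prolongation P, and operator complexity is commonly measured as the total number of nonzeros over all levels of the hierarchy divided by the nonzeros of the finest operator. The short sketch below computes both for a hypothetical two-level 1D Laplacian example; the learned non-Galerkin construction proposed in the paper is not reproduced here.

import scipy.sparse as sp

def galerkin_coarse_operator(A, P):
    """Standard Galerkin coarse-grid operator A_c = P^T A P."""
    return (P.T @ A @ P).tocsr()

def operator_complexity(levels):
    """Total nonzeros across the hierarchy relative to the finest operator."""
    return sum(A.nnz for A in levels) / levels[0].nnz

# Hypothetical two-level setup: 1D Laplacian with piecewise-linear prolongation.
n = 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
P = sp.lil_matrix((n, n // 2))
for j in range(n // 2):
    P[2 * j, j] = 1.0            # coarse node j injects into fine node 2j
    P[2 * j + 1, j] = 0.5        # and contributes to its fine neighbors
    if 2 * j + 2 < n:
        P[2 * j + 2, j] = 0.5
P = P.tocsr()

A_c = galerkin_coarse_operator(A, P)
print("operator complexity:", operator_complexity([A, A_c]))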

Reducing operator complexity in algebraic multigrid with machine learning approaches

R Huang, K Chang, H He, R Li, Y Xi - arXiv preprint arXiv:2307.07695, 2023 - arxiv.org
We propose a data-driven and machine-learning-based approach to compute non-Galerkin
coarse-grid operators in algebraic multigrid (AMG) methods, addressing the well-known …

An efficient nonlinear acceleration method that exploits symmetry of the Hessian

H He, S Zhao, Z Tang, JC Ho, Y Saad, Y Xi - arXiv preprint arXiv …, 2022 - arxiv.org
Nonlinear acceleration methods are powerful techniques to speed up fixed-point iterations.
However, many acceleration methods require storing a large number of previous iterates …