Large-margin contrastive learning with distance polarization regularizer

S Chen, G Niu, C Gong, J Li, J Yang… - International …, 2021 - proceedings.mlr.press
Contrastive learning (CL) pretrains models in a pairwise manner, where
given a data point, other data points are all regarded as dissimilar, including some that …
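
As a minimal illustration of the pairwise setup the abstract describes, the sketch below implements a generic InfoNCE-style contrastive loss in which every non-positive point is treated as a negative; it does not reproduce the paper's large-margin objective or distance polarization regularizer, and all names and hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(z_anchor, z_pos, z_negs, temperature=0.5):
    """Generic InfoNCE-style pairwise contrastive loss (illustrative only,
    not the paper's large-margin / distance-polarization objective).

    z_anchor: (d,) embedding of the anchor point
    z_pos:    (d,) embedding of its designated positive
    z_negs:   (k, d) embeddings of all other points, treated as negatives
    """
    z_anchor = F.normalize(z_anchor, dim=0)
    z_pos = F.normalize(z_pos, dim=0)
    z_negs = F.normalize(z_negs, dim=1)
    pos_sim = z_anchor @ z_pos / temperature   # similarity to the positive
    neg_sim = z_negs @ z_anchor / temperature  # similarities to all negatives
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    # Cross-entropy with the positive in slot 0: pulls the anchor toward its
    # positive and pushes it away from every other point, i.e. the "all other
    # points are dissimilar" assumption the abstract criticizes.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```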

Kernel regression in high dimensions: Refined analysis beyond double descent

F Liu, Z Liao, J Suykens - International Conference on …, 2021 - proceedings.mlr.press
In this paper, we provide a precise characterization of the generalization properties of high-dimensional kernel ridge regression across the under- and over-parameterized regimes …
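
For reference, a minimal sketch of the kernel ridge regression estimator whose generalization behavior the paper characterizes; the Gaussian kernel and the hyperparameter values are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # pairwise Gaussian kernel matrix between rows of A and rows of B
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def krr_fit_predict(X, y, X_test, lam=1e-2, sigma=1.0):
    """Standard kernel ridge regression in closed form."""
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    # solve (K + n*lam*I) alpha = y, the KRR normal equations
    alpha = np.linalg.solve(K + n * lam * np.eye(n), y)
    return gaussian_kernel(X_test, X, sigma) @ alpha
```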

Wonder: Weighted one-shot distributed ridge regression in high dimensions

E Dobriban, Y Sheng - Journal of Machine Learning Research, 2020 - jmlr.org
In many areas, practitioners need to analyze large data sets that challenge conventional
single-machine computing. To scale up data analysis, distributed and parallel computing …
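
A minimal sketch of the one-shot (single communication round) distributed ridge scheme the abstract points to: each machine fits ridge regression locally and a center combines the estimates by a weighted average. The paper derives optimal weights; the sample-size-proportional weights below are a simplifying stand-in:

```python
import numpy as np

def local_ridge(X, y, lam):
    # ridge solution on one machine: (X'X + lam*I)^{-1} X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def one_shot_distributed_ridge(shards, lam=1.0):
    """One-shot distributed ridge: fit locally on each (X, y) shard,
    then combine by a weighted average at the center. Weights here are
    proportional to shard size, an assumption for illustration only."""
    betas, sizes = [], []
    for X, y in shards:
        betas.append(local_ridge(X, y, lam))
        sizes.append(len(y))
    w = np.array(sizes, dtype=float)
    w /= w.sum()
    return sum(wi * bi for wi, bi in zip(w, betas))
```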

Distributed kernel gradient descent algorithm for minimum error entropy principle

T Hu, Q Wu, DX Zhou - Applied and Computational Harmonic Analysis, 2020 - Elsevier
Distributed learning based on the divide-and-conquer approach is a powerful tool for big
data processing. We introduce a distributed kernel gradient descent algorithm for the …
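
For context, a common empirical form of the minimum error entropy (MEE) criterion minimizes Rényi's quadratic entropy of the residuals $e_i = y_i - f(x_i)$, with the unknown error density replaced by a Parzen window estimate (the paper's exact scaling conventions may differ):

$$\hat{H}_2(e) = -\log\left(\frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} G_h(e_i - e_j)\right), \qquad G_h(t) = \frac{1}{\sqrt{2\pi}\,h}\,\exp\left(-\frac{t^2}{2h^2}\right),$$

and the distributed algorithm runs kernel gradient descent against this type of objective on each data subset of the divide-and-conquer partition.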

Optimal learning with Gaussians and correntropy loss

F Lv, J Fan - Analysis and Applications, 2021 - World Scientific
Correntropy-based learning has achieved great success in practice over the last few decades.
It originated in information-theoretic learning and provides an alternative to classical …
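
For reference, the correntropy-induced loss with a Gaussian kernel of scale $\sigma$ is commonly written (up to normalization conventions) as

$$\ell_\sigma(y, f(x)) = \sigma^2\left(1 - \exp\left(-\frac{(y - f(x))^2}{2\sigma^2}\right)\right),$$

which behaves like the least squares loss for small residuals but saturates near $\sigma^2$ for large ones; this saturation is the source of its robustness to outliers.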

Distributed least squares prediction for functional linear regression

H Tong - Inverse Problems, 2021 - iopscience.iop.org
To cope with the challenges of memory bottlenecks and algorithmic scalability when massive
data sets are involved, we propose a distributed least squares procedure in the framework of …
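
For context, the underlying functional linear model and the standard divide-and-conquer estimator take the form below; the averaging construction is a sketch of the usual distributed least squares scheme, not necessarily the paper's exact procedure:

$$Y = \int_0^1 \beta(t)\,X(t)\,dt + \varepsilon, \qquad \bar{\beta}_{D,\lambda} = \frac{1}{m}\sum_{j=1}^{m} \hat{\beta}_{D_j,\lambda},$$

where $\hat{\beta}_{D_j,\lambda}$ is a regularized least squares estimate computed on the $j$-th data subset $D_j$.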

Optimal learning rates for distribution regression

Z Fang, ZC Guo, DX Zhou - Journal of Complexity, 2020 - Elsevier
We study a learning algorithm for distribution regression with regularized least squares. This
algorithm, which contains two stages of sampling, aims at regressing from distributions to …
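
A minimal sketch of the two-stage structure the abstract describes, under common simplifying assumptions: each distribution is observed only through a bag of samples (the two stages of sampling), bags are represented by empirical kernel mean embeddings with a Gaussian base kernel and a linear second-stage kernel, and regularized least squares is run on the embeddings; all hyperparameters are illustrative:

```python
import numpy as np

def mean_embedding_gram(bags_a, bags_b, sigma=1.0):
    """Gram matrix of empirical kernel mean embeddings between two lists of
    bags (each bag = an array of samples drawn from one distribution).
    Entry (i, j) is the mean of the Gaussian base kernel over all pairs,
    i.e. the inner product of the two empirical embeddings."""
    G = np.zeros((len(bags_a), len(bags_b)))
    for i, A in enumerate(bags_a):
        for j, B in enumerate(bags_b):
            sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            G[i, j] = np.exp(-sq / (2 * sigma ** 2)).mean()
    return G

def distribution_regression(bags, y, test_bags, lam=1e-2):
    # second stage: regularized least squares on the embedded bags
    K = mean_embedding_gram(bags, bags)
    alpha = np.linalg.solve(K + lam * len(bags) * np.eye(len(bags)), y)
    return mean_embedding_gram(test_bags, bags) @ alpha
```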

Optimal rates of distributed regression with imperfect kernels

H Sun, Q Wu - Journal of Machine Learning Research, 2021 - jmlr.org
Distributed machine learning systems have been receiving increasing attention for their
efficiency in processing large-scale data. Many distributed frameworks have been proposed for …
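
A minimal sketch of the divide-and-conquer kernel ridge regression framework such papers analyze: solve a local KRR problem on each shard and average the local predictions (in the "imperfect kernels" setting, the local kernels need not even coincide); kernel choice and hyperparameters are illustrative:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    # pairwise Gaussian kernel matrix between rows of A and rows of B
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def distributed_krr_predict(shards, X_test, lam=1e-2, sigma=1.0):
    """Divide-and-conquer KRR: fit a local kernel ridge estimator on each
    (X, y) shard, then average the local predictions at the test points."""
    preds = []
    for X, y in shards:
        n = len(X)
        alpha = np.linalg.solve(rbf(X, X, sigma) + n * lam * np.eye(n), y)
        preds.append(rbf(X_test, X, sigma) @ alpha)
    return np.mean(preds, axis=0)  # average of the local estimators
```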

Stability-based generalization analysis of distributed learning algorithms for big data

X Wu, J Zhang, FY Wang - IEEE Transactions on Neural …, 2019 - ieeexplore.ieee.org
As efficient approaches to dealing with big data, divide-and-conquer distributed
algorithms, such as distributed kernel regression, bootstrap, and structured perception …
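
For context, the classical route from stability to generalization (Bousquet and Elisseeff) underpins this style of analysis: an algorithm $A$ is $\beta$-uniformly stable if

$$\sup_{z}\,\left|\ell(A_S, z) - \ell(A_{S'}, z)\right| \le \beta \quad \text{for any } S, S' \text{ differing in a single sample},$$

and its expected generalization gap is then bounded on the order of $\beta$; a distributed analysis tracks how splitting the data across $m$ machines and aggregating the local models affects $\beta$.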

Theoretical analysis of divide-and-conquer ERM: From the perspective of multi-view

Y Liao, Y Liu, S Liao, Q Hu, J Dang - Information Fusion, 2024 - Elsevier
Theoretical analysis of divide-and-conquer based distributed learning with least squares
loss in a reproducing kernel Hilbert space (RKHS) has recently been explored within the …
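
For reference, the divide-and-conquer least squares estimator studied in this line of work splits the sample $D$ into $m$ disjoint subsets $D_1, \dots, D_m$, solves a regularized least squares problem in the RKHS $\mathcal{H}_K$ on each, and averages the local solutions:

$$\bar{f}_{D,\lambda} = \frac{1}{m}\sum_{j=1}^{m} f_{D_j,\lambda}, \qquad f_{D_j,\lambda} = \arg\min_{f \in \mathcal{H}_K}\; \frac{1}{|D_j|}\sum_{(x,y)\in D_j}\big(f(x) - y\big)^2 + \lambda\,\|f\|_K^2.$$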