Randomness in neural networks: an overview

S Scardapane, D Wang - Wiley Interdisciplinary Reviews: Data …, 2017 - Wiley Online Library
Neural networks, as powerful tools for data mining and knowledge engineering, can learn
from data to build feature‐based classifiers and nonlinear predictive models. Training neural …

A review of kernel methods for feature extraction in nonlinear process monitoring

KE Pilario, M Shafiee, Y Cao, L Lao, SH Yang - Processes, 2019 - mdpi.com
Kernel methods are a class of learning machines for the fast recognition of nonlinear
patterns in any data set. In this paper, the applications of kernel methods for feature …

A survey of multi-view representation learning

Y Li, M Yang, Z Zhang - IEEE transactions on knowledge and …, 2018 - ieeexplore.ieee.org
Recently, multi-view representation learning has become a rapidly growing direction in
machine learning and data mining areas. This paper introduces two categories for multi …

On deep multi-view representation learning

W Wang, R Arora, K Livescu… - … conference on machine …, 2015 - proceedings.mlr.press
We consider learning representations (features) in the setting in which we have access to
multiple unlabeled views of the data for representation learning while only one view is …

Random features for kernel approximation: A survey on algorithms, theory, and beyond

F Liu, X Huang, Y Chen… - IEEE Transactions on …, 2021 - ieeexplore.ieee.org
The class of random features is one of the most popular techniques to speed up kernel
methods in large-scale problems. Related works have been recognized by the NeurIPS Test …
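The random-feature technique this survey covers can be illustrated with a minimal random Fourier features sketch for the RBF kernel (function names and parameter values below are illustrative assumptions, not from the survey itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, n_features=2000, gamma=1.0, rng=rng):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel,
    # i.e. a Gaussian with standard deviation sqrt(2 * gamma).
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(100, 5))
Z = rff_features(X)
K_approx = Z @ Z.T  # explicit features: kernel becomes a plain inner product
K_exact = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
err = np.abs(K_approx - K_exact).max()
```

The point of the construction is that downstream linear methods on `Z` run in time linear in the number of samples, while `Z @ Z.T` concentrates around the exact Gram matrix as `n_features` grows.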

Kernel operations on the GPU, with autodiff, without memory overflows

B Charlier, J Feydy, JA Glaunes, FD Collin… - Journal of Machine …, 2021 - jmlr.org
The KeOps library provides fast and memory-efficient GPU support for tensors whose
entries are given by a mathematical formula, such as kernel and distance matrices. KeOps …

Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration

S Mei, T Misiakiewicz, A Montanari - Applied and Computational Harmonic …, 2022 - Elsevier
Consider the classical supervised learning problem: we are given data $(y_i, x_i)$, $i \le n$, with $y_i$ a
response and $x_i \in X$ a covariate vector, and try to learn a model $\hat{f}: X \to \mathbb{R}$ to predict future …

Generalization properties of learning with random features

A Rudi, L Rosasco - Advances in neural information …, 2017 - proceedings.neurips.cc
We study the generalization properties of ridge regression with random features in the
statistical learning framework. We show for the first time that $O(1/\sqrt{n})$ learning …

Less is more: Nyström computational regularization

A Rudi, R Camoriano… - Advances in neural …, 2015 - proceedings.neurips.cc
We study Nyström type subsampling approaches to large scale kernel methods, and prove
learning bounds in the statistical learning setting, where random sampling and high …
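The Nyström subsampling idea this snippet refers to — pick a small set of landmark points, then map every sample through the inverse square root of the landmark Gram matrix — can be sketched as follows (the RBF kernel and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.1):
    """Exact RBF Gram matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_features(X, m=100, gamma=0.1, rng=rng):
    """Nystrom features: Z = K_nm @ K_mm^{-1/2}, so that
    Z @ Z.T = K_nm @ K_mm^{-1} @ K_nm.T approximates the full Gram matrix."""
    idx = rng.choice(len(X), size=m, replace=False)
    landmarks = X[idx]
    K_mm = rbf_kernel(landmarks, landmarks, gamma)
    # Eigendecomposition gives a numerically stable inverse square root;
    # near-zero eigenvalues are dropped (pseudo-inverse).
    vals, vecs = np.linalg.eigh(K_mm)
    keep = vals > 1e-10
    inv_sqrt = vecs[:, keep] / np.sqrt(vals[keep])
    return rbf_kernel(X, landmarks, gamma) @ inv_sqrt

X = rng.normal(size=(200, 5))
Z = nystrom_features(X)
err = np.abs(Z @ Z.T - rbf_kernel(X, X)).max()
```

The "less is more" angle of the paper is that the subsampling level `m` doubles as a regularization parameter, so aggressive subsampling can help statistically as well as computationally.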

Fast and scalable polynomial kernels via explicit feature maps

N Pham, R Pagh - Proceedings of the 19th ACM SIGKDD international …, 2013 - dl.acm.org
Approximation of non-linear kernels using random feature mapping has been successfully
employed in large-scale data analysis applications, accelerating the training of kernel …
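The explicit feature maps for polynomial kernels mentioned here can be sketched with a minimal Tensor-Sketch-style construction: combine independent Count Sketches of the input by FFT-based circular convolution (all names, sizes, and the test setup below are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def tensor_sketch(X, degree=2, D=4096, rng=rng):
    """Explicit D-dimensional feature map approximating the polynomial
    kernel k(x, y) = (x . y) ** degree."""
    n, d = X.shape
    prod_fft = np.ones((n, D), dtype=complex)
    for _ in range(degree):
        h = rng.integers(0, D, size=d)        # hash bucket per input coordinate
        s = rng.choice([-1.0, 1.0], size=d)   # random sign per input coordinate
        cs = np.zeros((n, D))
        # Count Sketch: scatter signed coordinates of each row into D buckets.
        np.add.at(cs, (slice(None), h), X * s)
        prod_fft *= np.fft.fft(cs, axis=1)
    # Inverse FFT of the product = circular convolution of the sketches,
    # which sketches the degree-fold tensor product of x with itself.
    return np.fft.ifft(prod_fft, axis=1).real

X = rng.normal(size=(50, 8))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit rows keep kernel values in [-1, 1]
Z = tensor_sketch(X, degree=2)
err = np.abs(Z @ Z.T - (X @ X.T) ** 2).max()
```

Because the convolution is done through the FFT, the map costs $O(n(d + D \log D))$ rather than the $O(n d^{degree})$ of writing out the tensor product explicitly.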