In this paper, we consider clustering based on kernel principal component analysis (KPCA) for high-dimension, low-sample-size (HDLSS) data. We give theoretical reasons …
In this paper, we study distance covariance, Hilbert–Schmidt covariance (aka Hilbert–Schmidt independence criterion [In Advances in Neural Information Processing Systems …
Object Oriented Data Analysis is a framework that facilitates inter-disciplinary research through new terminology for discussing the often many possible approaches to the analysis …
Interpoint distance based two sample tests in high dimension. Bernoulli 27(2), 2021, 1189–1211. https://doi.org/10.3150/20-BEJ1270 …
SH Wang, SY Huang - Journal of Multivariate Analysis, 2022 - Elsevier
Principal component analysis (PCA) has long been a useful and important tool for dimension reduction. However, this method must be used with care under certain …
C Zhu, JL Wang - Journal of the Royal Statistical Society Series …, 2023 - academic.oup.com
Testing the homogeneity between two samples of functional data is an important task. While this is feasible for intensely measured functional data, we explain why it is challenging for …
LR Goldberg, A Papanicolaou, A Shkolnik - SIAM Journal on Financial …, 2022 - SIAM
We identify and correct excess dispersion in the leading eigenvector of a sample covariance matrix when the number of variables vastly exceeds the number of observations. Our …
K Yata, M Aoshima - Scandinavian Journal of Statistics, 2020 - Wiley Online Library
In this article, we consider clustering based on principal component analysis (PCA) for high-dimensional mixture models. We present theoretical reasons why PCA is effective for …