To compress or not to compress—self-supervised learning and information theory: A review

R Shwartz Ziv, Y LeCun - Entropy, 2024 - mdpi.com
Deep neural networks excel in supervised learning tasks but are constrained by the need for
extensive labeled data. Self-supervised learning emerges as a promising alternative …

Unsupervised representation learning for time series: A review

Q Meng, H Qian, Y Liu, Y Xu, Z Shen, L Cui - arXiv preprint arXiv …, 2023 - arxiv.org
Unsupervised representation learning approaches aim to learn discriminative feature
representations from unlabeled data, without the requirement of annotating every sample …

White-box transformers via sparse rate reduction

Y Yu, S Buchanan, D Pai, T Chu, Z Wu… - Advances in …, 2023 - proceedings.neurips.cc
In this paper, we contend that the objective of representation learning is to compress and
transform the distribution of the data, say sets of tokens, towards a mixture of low …

Masked image modeling with local multi-scale reconstruction

H Wang, Y Tang, Y Wang, J Guo… - Proceedings of the …, 2023 - openaccess.thecvf.com
Masked Image Modeling (MIM) achieves outstanding success in self-supervised
representation learning. Unfortunately, MIM models typically have huge computational …

Factorized contrastive learning: Going beyond multi-view redundancy

PP Liang, Z Deng, MQ Ma, JY Zou… - Advances in …, 2024 - proceedings.neurips.cc
In a wide range of multimodal tasks, contrastive learning has become a particularly
appealing approach since it can successfully learn representations from abundant …

Disentangled multiplex graph representation learning

Y Mo, Y Lei, J Shen, X Shi… - … on Machine Learning, 2023 - proceedings.mlr.press
Unsupervised multiplex graph representation learning (UMGRL) has received increasing
interest, but few works have simultaneously focused on the common and private information …

Alignment-guided temporal attention for video action recognition

Y Zhao, Z Li, X Guo, Y Lu - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Temporal modeling is crucial for various video learning tasks. Most recent approaches
employ either factorized (2D+1D) or joint (3D) spatial-temporal operations to extract …

MixPHM: redundancy-aware parameter-efficient tuning for low-resource visual question answering

J Jiang, N Zheng - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Recently, finetuning pretrained vision-language models (VLMs) has been a prevailing
paradigm for achieving state-of-the-art performance in VQA. However, as VLMs scale, it …

Self-weighted contrastive learning among multiple views for mitigating representation degeneration

J Xu, S Chen, Y Ren, X Shi, H Shen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recently, numerous studies have demonstrated the effectiveness of contrastive learning
(CL), which learns feature representations by pulling in positive samples while pushing …

Rethinking explaining graph neural networks via non-parametric subgraph matching

F Wu, S Li, X Jin, Y Jiang, D Radev… - … on Machine Learning, 2023 - proceedings.mlr.press
The success of graph neural networks (GNNs) provokes the question about
explainability: "Which fraction of the input graph is the most determinant of the prediction?" …