Unsupervised representation learning for time series: A review

Q Meng, H Qian, Y Liu, Y Xu, Z Shen, L Cui - arXiv preprint arXiv …, 2023 - arxiv.org
Unsupervised representation learning approaches aim to learn discriminative feature
representations from unlabeled data, without the requirement of annotating every sample …

Improving self-supervised learning by characterizing idealized representations

Y Dubois, S Ermon, TB Hashimoto… - Advances in Neural …, 2022 - proceedings.neurips.cc
Despite the empirical successes of self-supervised learning (SSL) methods, it is unclear
what characteristics of their representations lead to high downstream accuracies. In this …

Does Negative Sampling Matter? A Review with Insights into its Theory and Applications

Z Yang, M Ding, T Huang, Y Cen, J Song… - … on Pattern Analysis …, 2024 - ieeexplore.ieee.org
Negative sampling has swiftly risen to prominence as a focal point of research, with wide-
ranging applications spanning machine learning, computer vision, natural language …

InfoNCE loss provably learns cluster-preserving representations

A Parulekar, L Collins, K Shanmugam… - The Thirty Sixth …, 2023 - proceedings.mlr.press
The goal of contrastive learning is to learn a representation that preserves underlying
clusters by keeping samples with similar content, e.g. the "dogness" of a dog, close to each …
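As a minimal sketch of the loss this entry studies (the function name and example similarities are illustrative, not taken from the paper), InfoNCE is a softmax cross-entropy over one positive and several negative similarities, scaled by a temperature:

```python
import math

def info_nce_loss(sims, positive_idx, temperature=0.1):
    """InfoNCE: negative log-softmax of the positive pair's similarity
    among all candidate similarities, after temperature scaling."""
    scaled = [s / temperature for s in sims]
    m = max(scaled)  # subtract the max for numerical stability
    log_sum = m + math.log(sum(math.exp(s - m) for s in scaled))
    return log_sum - scaled[positive_idx]

# The loss is small when the positive similarity dominates the negatives,
# and large when a negative outranks the positive.
low = info_nce_loss([0.9, 0.1, 0.0], positive_idx=0)
high = info_nce_loss([0.1, 0.9, 0.0], positive_idx=0)
assert low < high
```

Minimizing this objective pulls the positive pair together and pushes the negatives away, which is the mechanism behind the cluster-preservation result the title refers to.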

On the surrogate gap between contrastive and supervised losses

H Bao, Y Nagano, K Nozawa - International Conference on …, 2022 - proceedings.mlr.press
Contrastive representation learning encourages data representations to place semantically
similar pairs closer than randomly drawn negative samples, and has been successful in …

PSNEA: Pseudo-siamese network for entity alignment between multi-modal knowledge graphs

W Ni, Q Xu, Y Jiang, Z Cao, X Cao… - Proceedings of the 31st …, 2023 - dl.acm.org
Multi-modal entity alignment aims to identify entities that refer to the same concept in the real
world across a plethora of multi-modal knowledge graphs (MMKGs). Most existing methods …

MSCDA: Multi-level semantic-guided contrast improves unsupervised domain adaptation for breast MRI segmentation in small datasets

S Kuang, HC Woodruff, R Granzier, TJA van Nijnatten… - Neural Networks, 2023 - Elsevier
Deep learning (DL) applied to breast tissue segmentation in magnetic resonance imaging
(MRI) has received increased attention in the last decade; however, the domain shift which …

EMVCC: Enhanced multi-view contrastive clustering for hyperspectral images

F Luo, Y Liu, X Gong, Z Nan, T Guo - Proceedings of the 32nd ACM …, 2024 - dl.acm.org
Cross-view consensus representations play a critical role in hyperspectral image (HSI)
clustering. Recent multi-view contrastive clustering methods utilize a contrastive loss to extract …

FG-UAP: Feature-gathering universal adversarial perturbation

Z Ye, X Cheng, X Huang - 2023 International Joint Conference …, 2023 - ieeexplore.ieee.org
Deep Neural Networks (DNNs) are susceptible to elaborately designed perturbations,
whether such perturbations depend on the input image or not. The latter, called …

Contrastive Learning for Inference in Dialogue

E Ishii, Y Xu, B Wilie, Z Ji, H Lovenia, W Chung… - arXiv preprint arXiv …, 2023 - arxiv.org
Inference, especially that derived from inductive processes, is a crucial component of
conversation, complementing the information implicitly or explicitly conveyed by a speaker …