Contrastive self-supervised learning: review, progress, challenges and future research directions

P Kumar, P Rawat, S Chauhan - International Journal of Multimedia …, 2022 - Springer
In the last decade, deep supervised learning has had tremendous success. However, its
flaws, such as its dependency on manual and costly annotations on large datasets and …

Making sense of meaning: A survey on metrics for semantic and goal-oriented communication

TM Getu, G Kaddoum, M Bennis - IEEE Access, 2023 - ieeexplore.ieee.org
Semantic communication (SemCom) aims to convey the meaning behind a transmitted
message by transmitting only semantically-relevant information. This semantic-centric …

Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning

VW Liang, Y Zhang, Y Kwon… - Advances in Neural …, 2022 - proceedings.neurips.cc
We present modality gap, an intriguing geometric phenomenon of the representation space
of multi-modal models. Specifically, we show that different data modalities (e.g. images and …
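As a rough illustration of the phenomenon this entry describes, the sketch below quantifies a "modality gap" as the distance between the centroids of L2-normalized image and text embeddings. This is not the authors' code; the random arrays are placeholders for real multi-modal (e.g. CLIP-style) embeddings.

```python
# Minimal sketch: measure a modality gap as the distance between the centroids
# of normalized image and text embeddings (placeholder data, not real model output).
import numpy as np

rng = np.random.default_rng(0)
image_emb = rng.normal(size=(1000, 512))            # hypothetical image embeddings
text_emb = rng.normal(loc=0.3, size=(1000, 512))    # hypothetical text embeddings

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

image_emb, text_emb = normalize(image_emb), normalize(text_emb)

# Gap vector between the two modality centroids on the unit hypersphere.
gap = image_emb.mean(axis=0) - text_emb.mean(axis=0)
print("modality gap (L2 norm of centroid difference):", np.linalg.norm(gap))
```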

Text and code embeddings by contrastive pre-training

A Neelakantan, T Xu, R Puri, A Radford, JM Han… - arXiv preprint arXiv …, 2022 - arxiv.org
Text embeddings are useful features in many applications such as semantic search and
computing text similarity. Previous work typically trains models customized for different use …

Towards universal sequence representation learning for recommender systems

Y Hou, S Mu, WX Zhao, Y Li, B Ding… - Proceedings of the 28th …, 2022 - dl.acm.org
In order to develop effective sequential recommenders, a series of sequence representation
learning (SRL) methods have been proposed to model historical user behaviors. Most existing SRL …

SimCSE: Simple contrastive learning of sentence embeddings

T Gao, X Yao, D Chen - arXiv preprint arXiv:2104.08821, 2021 - arxiv.org
This paper presents SimCSE, a simple contrastive learning framework that greatly advances
state-of-the-art sentence embeddings. We first describe an unsupervised approach, which …
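The following is a minimal sketch of the unsupervised objective this entry refers to, not the authors' released code: each input is encoded twice with independent dropout masks, and the two views form a positive pair against in-batch negatives under an InfoNCE loss. A toy MLP and random "sentence" features stand in for a BERT encoder.

```python
# Sketch of an unsupervised SimCSE-style objective with a toy encoder (assumption:
# dropout alone provides the two augmented views, as described in the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

encoder = nn.Sequential(            # hypothetical stand-in for a BERT encoder
    nn.Linear(128, 256), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(256, 64)
)

x = torch.randn(32, 128)            # a batch of 32 toy sentence features

# Two forward passes of the SAME inputs; dropout makes the two views differ.
z1 = F.normalize(encoder(x), dim=-1)
z2 = F.normalize(encoder(x), dim=-1)

temperature = 0.05
sim = z1 @ z2.T / temperature       # cosine similarities of all pairs in the batch

# InfoNCE: each row's positive is its own second view (the diagonal);
# the other sentences in the batch act as negatives.
labels = torch.arange(x.size(0))
loss = F.cross_entropy(sim, labels)
print("unsupervised SimCSE-style loss:", loss.item())
```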

ConSERT: A contrastive framework for self-supervised sentence representation transfer

Y Yan, R Li, S Wang, F Zhang, W Wu, W Xu - arXiv preprint arXiv …, 2021 - arxiv.org
Learning high-quality sentence representations benefits a wide range of natural language
processing tasks. Though BERT-based pre-trained language models achieve high …

DiffCSE: Difference-based contrastive learning for sentence embeddings

YS Chuang, R Dangovski, H Luo, Y Zhang… - arXiv preprint arXiv …, 2022 - arxiv.org
We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence
embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference …

Contrastive learning for representation degeneration problem in sequential recommendation

R Qiu, Z Huang, H Yin, Z Wang - … conference on web search and data …, 2022 - dl.acm.org
Recent advancements of sequential deep learning models such as Transformer and BERT
have significantly facilitated sequential recommendation. However, according to our …

Whitening sentence representations for better semantics and faster retrieval

J Su, J Cao, W Liu, Y Ou - arXiv preprint arXiv:2103.15316, 2021 - arxiv.org
Pre-training models such as BERT have achieved great success in many natural language
processing tasks. However, how to obtain better sentence representation through these pre …
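To make the transform behind this entry concrete, here is a small sketch of a whitening step in the spirit of Su et al., not their released code: center the sentence embeddings, then map them with W = U diag(1/sqrt(lambda)) obtained from an SVD of the covariance matrix, optionally keeping only the leading columns for dimensionality reduction. The random matrix is a placeholder for real BERT sentence vectors.

```python
# Sketch of a whitening transform for sentence embeddings (placeholder data).
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5000, 768))    # stand-in for BERT sentence vectors

def compute_whitening(vecs, n_components=None):
    mu = vecs.mean(axis=0, keepdims=True)
    cov = np.cov((vecs - mu).T)              # feature covariance, shape (d, d)
    u, s, _ = np.linalg.svd(cov)
    w = u @ np.diag(1.0 / np.sqrt(s))        # whitening matrix W = U * Lambda^(-1/2)
    if n_components is not None:             # optional dimensionality reduction
        w = w[:, :n_components]
    return w, mu

w, mu = compute_whitening(embeddings, n_components=256)
whitened = (embeddings - mu) @ w
print(whitened.shape)                        # (5000, 256)
```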