Self-supervised learning for recommender systems: A survey

J Yu, H Yin, X Xia, T Chen, J Li… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
In recent years, neural architecture-based recommender systems have achieved
tremendous success, but they still fall short of expectations when dealing with highly sparse …

A Survey on Self-supervised Learning: Algorithms, Applications, and Future Trends

J Gui, T Chen, J Zhang, Q Cao, Z Sun… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Deep supervised learning algorithms typically require a large volume of labeled data to
achieve satisfactory performance. However, the process of collecting and labeling such data …

One fits all: Power general time series analysis by pretrained LM

T Zhou, P Niu, L Sun, R Jin - Advances in neural …, 2023 - proceedings.neurips.cc
Although we have witnessed great success of pre-trained models in natural language
processing (NLP) and computer vision (CV), limited progress has been made for general …

Are graph augmentations necessary? Simple graph contrastive learning for recommendation

J Yu, H Yin, X Xia, T Chen, L Cui… - Proceedings of the 45th …, 2022 - dl.acm.org
Contrastive learning (CL) has recently spurred a fruitful line of research in the field of
recommendation, since its ability to extract self-supervised signals from the raw data is well …

Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning

VW Liang, Y Zhang, Y Kwon… - Advances in Neural …, 2022 - proceedings.neurips.cc
We present the modality gap, an intriguing geometric phenomenon of the representation space
of multi-modal models. Specifically, we show that different data modalities (e.g., images and …

Improving graph collaborative filtering with neighborhood-enriched contrastive learning

Z Lin, C Tian, Y Hou, WX Zhao - … of the ACM web conference 2022, 2022 - dl.acm.org
Recently, graph collaborative filtering methods have been proposed as an effective
recommendation approach that can capture users' preferences over items by modeling the …

Vision-language pre-training with triple contrastive learning

J Yang, J Duan, S Tran, Y Xu… - Proceedings of the …, 2022 - openaccess.thecvf.com
Vision-language representation learning largely benefits from image-text alignment through
contrastive losses (e.g., InfoNCE loss). The success of this alignment strategy is attributed to …

On the opportunities and risks of foundation models

R Bommasani, DA Hudson, E Adeli, R Altman… - arXiv preprint arXiv …, 2021 - arxiv.org
AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are
trained on broad data at scale and are adaptable to a wide range of downstream tasks. We …

Delving into out-of-distribution detection with vision-language representations

Y Ming, Z Cai, J Gu, Y Sun, W Li… - Advances in neural …, 2022 - proceedings.neurips.cc
Recognizing out-of-distribution (OOD) samples is critical for machine learning systems
deployed in the open world. The vast majority of OOD detection methods are driven by a …

Learnable latent embeddings for joint behavioural and neural analysis

S Schneider, JH Lee, MW Mathis - Nature, 2023 - nature.com
Mapping behavioural actions to neural activity is a fundamental goal of neuroscience. As our
ability to record large neural and behavioural data increases, there is growing interest in …