Unsupervised representation learning approaches aim to learn discriminative feature representations from unlabeled data, without the requirement of annotating every sample …
In this paper, we contend that the objective of representation learning is to compress and transform the distribution of the data, say, sets of tokens, towards a mixture of low …
In a wide range of multimodal tasks, contrastive learning has become a particularly appealing approach since it can successfully learn representations from abundant …
Y Mo, Y Lei, J Shen, X Shi… - … on Machine Learning, 2023 - proceedings.mlr.press
Unsupervised multiplex graph representation learning (UMGRL) has received increasing interest, but few works have simultaneously focused on the common and private information …
Y Zhao, Z Li, X Guo, Y Lu - Advances in Neural Information …, 2022 - proceedings.neurips.cc
Temporal modeling is crucial for various video learning tasks. Most recent approaches employ either factorized (2D+1D) or joint (3D) spatial-temporal operations to extract …
J Jiang, N Zheng - … of the IEEE/CVF Conference on …, 2023 - openaccess.thecvf.com
Recently, finetuning pretrained vision-language models (VLMs) has been a prevailing paradigm for achieving state-of-the-art performance in VQA. However, as VLMs scale, it …
J Xu, S Chen, Y Ren, X Shi, H Shen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Recently, numerous studies have demonstrated the effectiveness of contrastive learning (CL), which learns feature representations by pulling in positive samples while pushing …
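The pull/push objective described in the snippet above is commonly instantiated as an InfoNCE-style loss, where the anchor's similarity to a positive view is raised relative to its similarity to negatives. A minimal NumPy sketch of that idea (an illustration under generic assumptions, not the exact formulation of any cited paper; the function name and temperature value are chosen here for exposition):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor: minimizing it
    pulls the positive's similarity up and pushes the negatives' down."""
    # Cosine similarity between embeddings.
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    # Positive goes at index 0, followed by the negatives.
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    # Softmax cross-entropy with the positive as the target class.
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # near-duplicate "view"
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
```

Because the positive is an almost-identical view of the anchor while the negatives are random, the positive logit dominates and the loss is small; replacing `positive` with an unrelated vector would raise it.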
The success of graph neural networks (GNNs) provokes the question about explainability: "Which fraction of the input graph is the most determinant of the prediction?" …