W Huang, M Ye, Z Shi, H Li, B Du - 2023 IEEE/CVF Conference …, 2023 - ieeexplore.ieee.org
Federated learning shows bright promise as a privacy-preserving collaborative learning technique. However, prevalent solutions mainly focus on all private data sampled from the …
Pre-trained language models (LMs) store knowledge in their parameters and can generate informative responses when used in conversational systems. However, LMs suffer from the …
R Zhu, B Zhao, J Liu, Z Sun… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Contrastive learning, which aims at minimizing the distance between positive pairs while maximizing that of negative ones, has been widely and successfully applied in unsupervised …
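The contrastive objective described in this snippet can be sketched as an InfoNCE-style loss; the NumPy formulation below is illustrative only and not taken from the cited paper:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for a single anchor embedding.

    Minimises the distance to the positive (high cosine similarity)
    while maximising that to the negatives, as the snippet describes.
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Softmax cross-entropy with the positive treated as the correct class.
    return -pos_sim + np.log(np.exp(logits).sum())

# Illustration: an aligned positive yields a lower loss than an opposed one.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
loss_easy = info_nce(anchor, anchor, [rng.normal(size=8) for _ in range(4)])
loss_hard = info_nce(anchor, -anchor, [anchor.copy() for _ in range(4)])
assert loss_easy < loss_hard
```

The loss is non-negative because the positive's own logit is included in the normalising sum.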
C Park, S Yun, S Chun - Advances in Neural Information …, 2022 - proceedings.neurips.cc
We propose the first unified theoretical analysis of mixed sample data augmentation (MSDA), such as Mixup and CutMix. Our theoretical results show that regardless of the …
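Mixup, one of the two MSDA schemes named in this snippet, can be sketched as follows (a minimal NumPy version, not the paper's analysis; `alpha` parameterises the Beta distribution as in the original Mixup formulation):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Mixup: convex combination of two samples and their labels.

    lam ~ Beta(alpha, alpha); the label is mixed with the same
    coefficient as the inputs.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

# Usage with one-hot labels: the mixed label remains a valid distribution.
rng = np.random.default_rng(1)
x, y, lam = mixup(np.zeros((2, 2)), np.array([1.0, 0.0]),
                  np.ones((2, 2)), np.array([0.0, 1.0]), rng=rng)
assert np.allclose(y.sum(), 1.0)
```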
Recently advanced unsupervised learning approaches use a siamese-like framework to compare two "views" of the same image for learning representations. Making the two …
In this work we present DREAM, an fMRI-to-image method for reconstructing viewed images from brain activities, grounded in fundamental knowledge of the human visual system. We …
Data mixing (e.g., Mixup, CutMix, ResizeMix) is an essential component for advancing recognition models. In this paper, we focus on studying its effectiveness in the …
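CutMix, the regional variant of data mixing named in this snippet, replaces a rectangular patch rather than interpolating pixels; a minimal sketch (illustrative, not the cited paper's implementation):

```python
import numpy as np

def cutmix(img1, img2, alpha=1.0, rng=None):
    """CutMix: paste a random rectangular patch of img2 into img1.

    The returned mixing ratio is the area fraction kept from img1,
    which is used to mix the labels accordingly.
    Images are H x W (x C) arrays of equal shape.
    """
    rng = rng or np.random.default_rng()
    h, w = img1.shape[:2]
    lam = rng.beta(alpha, alpha)
    # Patch side lengths scale with sqrt(1 - lam) so the patch
    # area is roughly (1 - lam) * H * W.
    ph, pw = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = max(cy - ph // 2, 0), min(cy + ph // 2, h)
    x1, x2 = max(cx - pw // 2, 0), min(cx + pw // 2, w)
    mixed = img1.copy()
    mixed[y1:y2, x1:x2] = img2[y1:y2, x1:x2]
    # Recompute the ratio from the actual (possibly border-clipped) patch.
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)
    return mixed, lam_adj

rng = np.random.default_rng(0)
mixed, lam_adj = cutmix(np.zeros((8, 8)), np.ones((8, 8)), rng=rng)
```

Unlike Mixup, every pixel of the mixed image comes verbatim from one of the two sources.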
S Zhang, M Liu, J Yan, H Zhang, L Huang… - Proceedings of the 28th …, 2022 - dl.acm.org
Negative pairs, especially hard negatives combined with common (easy-to-discriminate) negatives, are essential in contrastive learning, where they play a role in avoiding …
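A common way to operationalise "hard" negatives, as distinguished in this snippet, is to rank candidates by similarity to the anchor; the helper below is an illustrative sketch, not the cited paper's method:

```python
import numpy as np

def hard_negatives(anchor, candidates, k=2):
    """Select the k hardest negatives: the candidate embeddings most
    similar to the anchor, hence hardest to discriminate from it.
    `candidates` is an (N, D) array, `anchor` a length-D vector.
    """
    sims = candidates @ anchor / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(anchor)
    )
    order = np.argsort(-sims)  # indices in descending similarity
    return candidates[order[:k]], sims[order[:k]]

# The near-duplicate of the anchor ranks as the hardest negative.
anchor = np.array([1.0, 0.0, 0.0])
cands = np.array([[0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
hard, sims = hard_negatives(anchor, cands, k=1)
```

Easy (common) negatives would be the low-similarity remainder of the ranking.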
In light of the success of contrastive learning in the image domain, current self-supervised video representation learning methods usually employ contrastive loss to facilitate video …