T Jang, X Wang - Proceedings of the IEEE/CVF Conference …, 2023 - openaccess.thecvf.com
Contrastive learning is a self-supervised representation learning method that achieves milestone performance in various classification tasks. However, due to its unsupervised …
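The contrastive objective this snippet refers to is typically an InfoNCE/NT-Xent-style loss that pulls paired views together and pushes other samples apart. A minimal NumPy sketch under that assumption (not the paper's exact formulation; function name and temperature value are illustrative):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss (sketch).

    anchors, positives: (N, D) embedding arrays; row i of `positives`
    is the augmented view paired with row i of `anchors`, and all
    other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy with the matching pair (the diagonal) as the target.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Matched pairs should yield a lower loss than mismatched ones, which is the property the representation learning relies on.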
W Nie, T Karras, A Garg, S Debnath… - International …, 2020 - proceedings.mlr.press
Disentanglement learning is crucial for obtaining disentangled representations and controllable generation. Current disentanglement methods face several inherent limitations …
Data in the vision domain often exhibit a highly skewed class distribution, i.e., most data belong to a few majority classes, while the minority classes contain only a scarce number of instances …
P Bojanowski, A Joulin - International Conference on …, 2017 - proceedings.mlr.press
Convolutional neural networks provide visual features that perform remarkably well in many computer vision applications. However, training these networks requires significant amounts …
X Chen, LJ Li, L Fei-Fei… - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
We present a novel framework for iterative visual reasoning. Our framework goes beyond current recognition systems, which lack the capability to reason beyond a stack of convolutions …
A Yu, K Grauman - … of the IEEE International Conference on …, 2017 - openaccess.thecvf.com
Distinguishing subtle differences in attributes is valuable, yet learning to make visual comparisons remains nontrivial. Not only is the number of possible comparisons quadratic in …
Z Wang, Z Dai, B Póczos… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
When labeled data is scarce for a specific target task, transfer learning often offers an effective solution by utilizing data from a related source task. However, when transferring …
J Lyu, S Zhang, Y Qi, J Xin - Proceedings of the 26th ACM SIGKDD …, 2020 - dl.acm.org
ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its basic operations include grouped, channel-wise convolution and channel shuffling. However …
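The channel-shuffle operation the snippet mentions permutes channels so that information flows between convolution groups. A minimal NumPy sketch of that operation (the reshape–transpose–reshape trick; array layout and function name are assumptions, not taken from the paper):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle as used in ShuffleNet-style networks (sketch).

    x: (N, C, H, W) feature map with C divisible by `groups`.
    Reshapes channels into (groups, C // groups), transposes the two
    axes, and flattens back, interleaving channels across groups.
    """
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap group and per-group channel axes
    return x.reshape(n, c, h, w)
```

For example, with 6 channels and 2 groups, channel order [0,1,2,3,4,5] becomes [0,3,1,4,2,5], so each group's output mixes channels from both input groups.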
We study the problem of building models that disentangle independent factors of variation. Such models could be used to encode features that can efficiently be used for classification …