Aggregated residual transformations for deep neural networks

S Xie, R Girshick, P Dollár, Z Tu… - Proceedings of the IEEE …, 2017 - openaccess.thecvf.com
We present a simple, highly modularized network architecture for image classification. Our
network is constructed by repeating a building block that aggregates a set of transformations …

Difficulty-based sampling for debiased contrastive representation learning

T Jang, X Wang - Proceedings of the IEEE/CVF Conference …, 2023 - openaccess.thecvf.com
Contrastive learning is a self-supervised representation learning method that achieves
milestone performance in various classification tasks. However, due to its unsupervised …

Semi-supervised StyleGAN for disentanglement learning

W Nie, T Karras, A Garg, S Debnath… - International …, 2020 - proceedings.mlr.press
Disentanglement learning is crucial for obtaining disentangled representations and
controllable generation. Current disentanglement methods face several inherent limitations …

Learning deep representation for imbalanced classification

C Huang, Y Li, CC Loy, X Tang - Proceedings of the IEEE …, 2016 - openaccess.thecvf.com
Data in the vision domain often exhibit a highly skewed class distribution, i.e., most data belong to
a few majority classes, while the minority classes only contain a scarce amount of instances …

Unsupervised learning by predicting noise

P Bojanowski, A Joulin - International Conference on …, 2017 - proceedings.mlr.press
Convolutional neural networks provide visual features that perform remarkably well in many
computer vision applications. However, training these networks requires significant amounts …

Iterative visual reasoning beyond convolutions

X Chen, LJ Li, L Fei-Fei… - Proceedings of the IEEE …, 2018 - openaccess.thecvf.com
We present a novel framework for iterative visual reasoning. Our framework goes beyond
current recognition systems that lack the capability to reason beyond a stack of convolutions …

Semantic jitter: Dense supervision for visual comparisons via synthetic images

A Yu, K Grauman - … of the IEEE International Conference on …, 2017 - openaccess.thecvf.com
Distinguishing subtle differences in attributes is valuable, yet learning to make visual
comparisons remains nontrivial. Not only is the number of possible comparisons quadratic in …

Characterizing and avoiding negative transfer

Z Wang, Z Dai, B Póczos… - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
When labeled data is scarce for a specific target task, transfer learning often offers an
effective solution by utilizing data from a related source task. However, when transferring …

AutoShuffleNet: Learning permutation matrices via an exact Lipschitz continuous penalty in deep convolutional neural networks

J Lyu, S Zhang, Y Qi, J Xin - Proceedings of the 26th ACM SIGKDD …, 2020 - dl.acm.org
ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its
basic operations include group, channel-wise convolution and channel shuffling. However …

Challenges in disentangling independent factors of variation

A Szabó, Q Hu, T Portenier, M Zwicker… - arXiv preprint arXiv …, 2017 - arxiv.org
We study the problem of building models that disentangle independent factors of variation.
Such models could be used to encode features that can efficiently be used for classification …