Fake it till you make it: Learning transferable representations from synthetic ImageNet clones

MB Sarıyıldız, K Alahari, D Larlus… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recent image generation models such as Stable Diffusion have exhibited an impressive
ability to generate fairly realistic images starting from a simple text prompt. Could such …
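
As a concrete illustration of the kind of text-to-image synthesis this line of work builds on, the sketch below generates class-named images with the Hugging Face diffusers library; the checkpoint id, prompt template, and class names are illustrative assumptions, not the paper's exact setup.

import torch
from diffusers import StableDiffusionPipeline

# Assumed checkpoint; the paper's prompting strategies differ in detail.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

classes = ["tench", "goldfish", "great white shark"]  # placeholder class names
for name in classes:
    image = pipe(f"a photo of a {name}").images[0]    # one synthetic sample per prompt
    image.save(f"synthetic_{name.replace(' ', '_')}.png")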

EfficientNetV2: Smaller models and faster training

M Tan, Q Le - International conference on machine learning, 2021 - proceedings.mlr.press
This paper introduces EfficientNetV2, a new family of convolutional networks that have faster
training speed and better parameter efficiency than previous models. To develop these …

AutoFormer: Searching transformers for visual recognition

M Chen, H Peng, J Fu, H Ling - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Recently, pure transformer-based models have shown great potential for vision tasks such
as image classification and detection. However, the design of transformer networks is …

The computational limits of deep learning

NC Thompson, K Greenewald, K Lee… - arXiv preprint arXiv …, 2020 - assets.pubpub.org
Deep learning's recent history has been one of achievement: from triumphing over humans
in the game of Go to world-leading performance in image classification, voice recognition …

A simple framework for contrastive learning of visual representations

T Chen, S Kornblith, M Norouzi… - … conference on machine …, 2020 - proceedings.mlr.press
This paper presents SimCLR: a simple framework for contrastive learning of visual
representations. We simplify recently proposed contrastive self-supervised learning …
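
A minimal sketch of the NT-Xent contrastive objective at the core of SimCLR, assuming z1 and z2 hold the projected embeddings of two augmented views of the same batch; the temperature value is an illustrative default.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    # Normalize and stack both views: rows 0..n-1 are view 1, rows n..2n-1 are view 2.
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    sim = z @ z.t() / temperature          # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))      # a sample is never its own negative
    # The positive for row i is the other view of the same image: (i + n) mod 2n.
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(8, 128), torch.randn(8, 128))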

Do adversarially robust imagenet models transfer better?

H Salman, A Ilyas, L Engstrom… - Advances in Neural …, 2020 - proceedings.neurips.cc
Transfer learning is a widely-used paradigm in deep learning, where models pre-trained on
standard datasets can be efficiently adapted to downstream tasks. Typically, better pre …
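
The standard recipe being compared is full fine-tuning of an ImageNet-pre-trained backbone on a downstream task; a minimal sketch follows, with a stock torchvision ResNet-50 standing in for the authors' robust checkpoints and the class count chosen arbitrarily.

import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)   # placeholder pre-trained backbone
model.fc = nn.Linear(model.fc.in_features, 10)             # new head for an assumed 10-class task

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
images, labels = torch.randn(4, 3, 224, 224), torch.randint(0, 10, (4,))
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()    # in practice this runs inside the usual training loop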

Learning vision from models rivals learning vision from data

Y Tian, L Fan, K Chen, D Katabi… - Proceedings of the …, 2024 - openaccess.thecvf.com
We introduce SynCLR, a novel approach for learning visual representations exclusively from
synthetic images without any real data. We synthesize a large dataset of image captions …

An overview of mixing augmentation methods and augmentation strategies

D Lewy, J Mańdziuk - Artificial Intelligence Review, 2023 - Springer
Deep Convolutional Neural Networks have made incredible progress in many
Computer Vision tasks. This progress, however, often relies on the availability of large …
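
One representative mixing augmentation from this family is mixup, which trains on convex combinations of image pairs and their one-hot labels; the sketch below is a minimal version with an assumed Beta(0.2, 0.2) mixing coefficient.

import torch

def mixup(images, labels_onehot, alpha=0.2):
    # Draw a mixing coefficient and a random pairing of the batch.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1 - lam) * images[perm]
    mixed_labels = lam * labels_onehot + (1 - lam) * labels_onehot[perm]
    return mixed_images, mixed_labels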

How well do self-supervised models transfer?

L Ericsson, H Gouk… - Proceedings of the IEEE …, 2021 - openaccess.thecvf.com
Self-supervised visual representation learning has seen huge progress recently, but no
large scale evaluation has compared the many models now available. We evaluate the …
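
A common protocol in such evaluations is the linear probe: freeze the self-supervised backbone and fit only a linear classifier on its features. The sketch below assumes a ResNet-50 encoder with 2048-d pooled features and a 100-class downstream task; in the benchmark the backbone would be loaded with self-supervised weights.

import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights=None)      # placeholder; load self-supervised weights here
backbone.fc = nn.Identity()            # expose the 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

classifier = nn.Linear(2048, 100)      # assumed feature dim and class count
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)

images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 100, (8,))
with torch.no_grad():
    feats = backbone(images)           # backbone stays frozen
loss = nn.functional.cross_entropy(classifier(feats), labels)
loss.backward()
optimizer.step()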

EfficientNet: Rethinking model scaling for convolutional neural networks

M Tan, Q Le - International conference on machine learning, 2019 - proceedings.mlr.press
Convolutional Neural Networks (ConvNets) are commonly developed at a fixed
resource budget, and then scaled up for better accuracy if more resources are given. In this …
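
The compound scaling rule the paper proposes scales depth, width and input resolution jointly by alpha**phi, beta**phi and gamma**phi, with alpha * beta**2 * gamma**2 ≈ 2 so FLOPs grow roughly 2**phi; a worked example with the coefficients reported in the paper (alpha=1.2, beta=1.1, gamma=1.15):

alpha, beta, gamma = 1.2, 1.1, 1.15    # grid-searched coefficients from the paper

for phi in range(4):                   # scaling steps in the B0..B3 style
    depth_mult = alpha ** phi
    width_mult = beta ** phi
    res_mult = gamma ** phi
    flops_mult = (alpha * beta ** 2 * gamma ** 2) ** phi
    print(f"phi={phi}: depth x{depth_mult:.2f}, width x{width_mult:.2f}, "
          f"resolution x{res_mult:.2f}, ~FLOPs x{flops_mult:.2f}")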