A comprehensive survey on pretrained foundation models: A history from BERT to ChatGPT

C Zhou, Q Li, C Li, J Yu, Y Liu, G Wang… - arXiv preprint arXiv …, 2023 - arxiv.org
Pretrained Foundation Models (PFMs) are regarded as the foundation for various
downstream tasks with different data modalities. A PFM (e.g., BERT, ChatGPT, and GPT-4) is …

Pre-trained language models and their applications

H Wang, J Li, H Wu, E Hovy, Y Sun - Engineering, 2023 - Elsevier
Pre-trained language models have achieved striking success in natural language
processing (NLP), leading to a paradigm shift from supervised learning to pre-training …

DINOv2: Learning robust visual features without supervision

M Oquab, T Darcet, T Moutakanni, H Vo… - arXiv preprint arXiv …, 2023 - arxiv.org
The recent breakthroughs in natural language processing for model pretraining on large
quantities of data have opened the way for similar foundation models in computer vision …

A comprehensive survey of image augmentation techniques for deep learning

M Xu, S Yoon, A Fuentes, DS Park - Pattern Recognition, 2023 - Elsevier
Although deep learning has achieved satisfactory performance in computer vision, a large
volume of images is required. However, collecting images is often expensive and …
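
To make the flavor of the techniques this survey catalogs concrete, here is a minimal sketch of a standard training-time augmentation pipeline written with torchvision; every transform and parameter below is an illustrative common choice, not a recommendation from the paper.

    import torchvision.transforms as T

    # Typical augmentation chain applied to each training image.
    train_transform = T.Compose([
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),   # random crop, rescaled to 224 px
        T.RandomHorizontalFlip(p=0.5),                # mirror with 50% probability
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),            # brightness/contrast/saturation/hue
        T.RandomGrayscale(p=0.2),                     # occasional grayscale conversion
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],       # ImageNet channel statistics
                    std=[0.229, 0.224, 0.225]),
    ])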

Self-supervised learning for medical image classification: a systematic review and implementation guidelines

SC Huang, A Pareek, M Jensen, MP Lungren… - NPJ Digital …, 2023 - nature.com
Advancements in deep learning and computer vision provide promising solutions for
medical image analysis, potentially improving healthcare and patient outcomes. However …

Transformer-based unsupervised contrastive learning for histopathological image classification

X Wang, S Yang, J Zhang, M Wang, J Zhang… - Medical image …, 2022 - Elsevier
A large-scale and well-annotated dataset is a key factor for the success of deep learning in
medical image analysis. However, assembling such large annotations is very challenging …
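
The snippet does not spell out the training objective, but contrastive pre-training of this kind typically optimizes a normalized temperature-scaled cross-entropy (NT-Xent/InfoNCE) loss between two augmented views of each image. The sketch below shows that generic loss in PyTorch; it is the standard family of objectives, not this paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def nt_xent(z1, z2, temperature=0.1):
        """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
        sim = z @ z.t() / temperature                       # pairwise cosine similarities
        sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
        n = z1.size(0)
        # the positive for row i is the other augmented view of the same image
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)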

Masked feature prediction for self-supervised visual pre-training

C Wei, H Fan, S Xie, CY Wu, A Yuille… - Proceedings of the …, 2022 - openaccess.thecvf.com
We present Masked Feature Prediction (MaskFeat) for self-supervised pre-training
of video models. Our approach first randomly masks out a portion of the input sequence and …
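
A minimal sketch of the training step this abstract describes: mask a random subset of tokens, then regress the features of the masked positions. MaskFeat's published targets are HOG descriptors; the `hog_features` stub below is a runnable placeholder for that extractor, and the encoder/head interfaces and mask ratio are illustrative assumptions, not the paper's configuration.

    import torch

    def hog_features(patches):
        # Placeholder target extractor: MaskFeat regresses HOG descriptors of
        # the masked patches; returning raw values keeps the sketch runnable.
        return patches

    def maskfeat_loss(encoder, head, patches, mask_ratio=0.4):
        """patches: (B, N, D) token embeddings of a video or image."""
        B, N, D = patches.shape
        num_masked = int(N * mask_ratio)
        order = torch.rand(B, N).argsort(dim=1)   # random token order per sample
        masked = order[:, :num_masked]            # indices of the masked tokens
        rows = torch.arange(B).unsqueeze(1)       # (B, 1) row indexer
        x = patches.clone()
        x[rows, masked] = torch.zeros(D)          # mask token (learned in practice)
        pred = head(encoder(x))                   # predict features at every token
        target = hog_features(patches)
        # regression loss on the masked positions only
        return ((pred[rows, masked] - target[rows, masked]) ** 2).mean()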

SimMIM: A simple framework for masked image modeling

Z Xie, Z Zhang, Y Cao, Y Lin, J Bao… - Proceedings of the …, 2022 - openaccess.thecvf.com
This paper presents SimMIM, a simple framework for masked image modeling. We have
simplified recently proposed relevant approaches, without the need for special designs …
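
A minimal PyTorch sketch of the recipe SimMIM is known for: randomly mask patch tokens, encode, regress the raw pixels of the masked patches through a lightweight linear head, and apply an L1 loss on masked positions only. The shapes and mask ratio here are illustrative assumptions, not the paper's settings.

    import torch
    import torch.nn.functional as F

    def simmim_loss(encoder, linear_head, tokens, raw_pixels, mask_ratio=0.6):
        """tokens: (B, N, D) patch embeddings; raw_pixels: (B, N, P) pixels per patch."""
        B, N, D = tokens.shape
        mask = torch.rand(B, N) < mask_ratio       # True where a patch is masked
        mask_token = torch.zeros(D)                # learned parameter in practice
        x = torch.where(mask.unsqueeze(-1), mask_token, tokens)
        pred = linear_head(encoder(x))             # (B, N, P) pixel predictions
        # L1 reconstruction loss over the masked patches only
        return F.l1_loss(pred[mask], raw_pixels[mask])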

Self-supervised pre-training of Swin transformers for 3D medical image analysis

Y Tang, D Yang, W Li, HR Roth… - Proceedings of the …, 2022 - openaccess.thecvf.com
Vision Transformers (ViTs) have shown great performance in self-supervised
learning of global and local representations that can be transferred to downstream …

Test-time training with masked autoencoders

Y Gandelsman, Y Sun, X Chen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Test-time training adapts to a new test distribution on the fly by optimizing a model for each
test input using self-supervision. In this paper, we use masked autoencoders for this one …
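
A sketch of the loop the abstract describes: for each test input, take a few gradient steps on a masked-autoencoding reconstruction loss, predict with the adapted encoder, then restore the original weights. Here `mae_reconstruction_loss` is a stand-in for any masked-autoencoder objective (for instance, the SimMIM-style loss sketched above), and the step count and learning rate are arbitrary illustrative values.

    import copy
    import torch

    def predict_with_ttt(encoder, decoder, classifier, x, steps=10, lr=1e-3):
        state = copy.deepcopy(encoder.state_dict())       # save the original weights
        opt = torch.optim.SGD(encoder.parameters(), lr=lr)
        for _ in range(steps):                            # self-supervised adaptation
            loss = mae_reconstruction_loss(encoder, decoder, x)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            y = classifier(encoder(x))                    # predict with adapted encoder
        encoder.load_state_dict(state)                    # reset for the next test input
        return y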