Machine and deep learning methods for radiomics

M Avanzo, L Wei, J Stancanello, M Vallieres… - Medical …, 2020 - Wiley Online Library
Radiomics is an emerging area in quantitative image analysis that aims to relate large‐scale
extracted imaging information to clinical and biological endpoints. The development of …

Domain adaptation for visual applications: A comprehensive survey

G Csurka - arXiv preprint arXiv:1702.05374, 2017 - arxiv.org
The aim of this paper is to give an overview of domain adaptation and transfer learning with
a specific view on visual applications. After a general motivation, we first position domain …

Revisiting weak-to-strong consistency in semi-supervised semantic segmentation

L Yang, L Qi, L Feng, W Zhang… - Proceedings of the IEEE …, 2023 - openaccess.thecvf.com
In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch
from semi-supervised classification, where the prediction of a weakly perturbed image …
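The snippet above names the weak-to-strong consistency framework: the prediction on a weakly perturbed image serves as the training target for a strongly perturbed view of the same image. A minimal numpy sketch of that loss, with confidence thresholding (as used in FixMatch) omitted for brevity; all names here are illustrative, not the paper's implementation:

```python
import numpy as np

def weak_to_strong_loss(weak_probs, strong_logits):
    """Sketch of weak-to-strong consistency: the hard prediction on the
    weakly perturbed view is the target for the strongly perturbed view,
    scored with cross-entropy."""
    weak_probs = np.asarray(weak_probs)
    strong_logits = np.asarray(strong_logits)
    # Hard pseudo-labels from the weak view.
    targets = weak_probs.argmax(axis=1)
    # Log-softmax over strong-view logits (numerically stabilized).
    z = strong_logits - strong_logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the weak-view targets.
    return float(-log_p[np.arange(len(targets)), targets].mean())
```

When the strong-view prediction agrees confidently with the weak-view label, the loss approaches zero; when it disagrees, the loss grows, pushing the two views toward consistent predictions.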

Deep long-tailed learning: A survey

Y Zhang, B Kang, B Hooi, S Yan… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Deep long-tailed learning, one of the most challenging problems in visual recognition, aims
to train well-performing deep models from a large number of images that follow a long-tailed …

MultiMAE: Multi-modal multi-task masked autoencoders

R Bachmann, D Mizrahi, A Atanov, A Zamir - European Conference on …, 2022 - Springer
We propose a pre-training strategy called Multi-modal Multi-task Masked Autoencoders
(MultiMAE). It differs from standard Masked Autoencoding in two key aspects: I) it can …
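The core input operation in masked autoencoding is randomly hiding most patches and encoding only the visible remainder; in the multi-modal setting the mask is drawn jointly across modalities. A toy numpy sketch of that joint masking step (function names, the dict layout, and the keep ratio are assumptions for illustration, not the MultiMAE code):

```python
import numpy as np

def mask_multimodal_patches(modalities, keep_ratio=0.25, rng=None):
    """Sample one random mask over patches pooled from all modalities.

    modalities: dict mapping modality name -> array of shape (num_patches, dim).
    Returns the visible (unmasked) patches per modality. Toy illustration of
    masked-autoencoder-style input masking only.
    """
    rng = rng or np.random.default_rng(0)
    # Pool (modality, patch_index) pairs so the mask is shared across modalities.
    pairs = [(name, i) for name, x in modalities.items() for i in range(len(x))]
    n_keep = max(1, int(keep_ratio * len(pairs)))
    kept = rng.choice(len(pairs), size=n_keep, replace=False)
    visible = {name: [] for name in modalities}
    for k in kept:
        name, i = pairs[k]
        visible[name].append(i)
    return {name: modalities[name][sorted(idx)] for name, idx in visible.items()}
```

A decoder would then be trained to reconstruct the hidden patches of every modality from this small visible subset.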

ST++: Make self-training work better for semi-supervised semantic segmentation

L Yang, W Zhuo, L Qi, Y Shi… - Proceedings of the IEEE …, 2022 - openaccess.thecvf.com
Self-training via pseudo labeling is a conventional, simple, and popular pipeline to leverage
unlabeled data. In this work, we first construct a strong baseline of self-training (namely ST) …
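The self-training pipeline the snippet refers to is: fit on labeled data, pseudo-label the unlabeled pool, and refit on the union. A toy end-to-end sketch using a nearest-centroid classifier as a stand-in model (ST++ additionally selects reliable pseudo-labels and uses strong augmentation, which this sketch omits):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid per class label.
    labels = np.unique(y)
    return labels, np.stack([X[y == c].mean(axis=0) for c in labels])

def nearest_centroid_predict(X, labels, centroids):
    # Assign each point to its closest centroid.
    d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
    return labels[d.argmin(1)]

def self_train(X_l, y_l, X_u, rounds=2):
    """Toy self-training loop: fit on labeled data, pseudo-label the
    unlabeled pool, then refit on labeled + pseudo-labeled data."""
    labels, cents = nearest_centroid_fit(X_l, y_l)
    for _ in range(rounds):
        pseudo = nearest_centroid_predict(X_u, labels, cents)
        X_all = np.concatenate([X_l, X_u])
        y_all = np.concatenate([y_l, pseudo])
        labels, cents = nearest_centroid_fit(X_all, y_all)
    return labels, cents
```

Even one or two such rounds can move the decision boundary toward the structure of the unlabeled data, which is the effect self-training exploits.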

Dash: Semi-supervised learning with dynamic thresholding

Y Xu, L Shang, J Ye, Q Qian, YF Li… - International …, 2021 - proceedings.mlr.press
While semi-supervised learning (SSL) has received tremendous attention in many machine
learning tasks due to its successful use of unlabeled data, existing SSL algorithms use either …
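Dash's distinguishing idea is a selection threshold that is dynamic rather than fixed: unlabeled examples are kept only if their loss falls below a threshold that decays over training. A hedged sketch of that selection rule; the schedule shape loosely follows the paper's rho_t = c * gamma^{-(t-1)} * rho_0, but the constants and names here are illustrative:

```python
import numpy as np

def dash_select(losses, step, rho0=1.0, gamma=1.3, c=1.0001):
    """Keep unlabeled examples whose loss is under a threshold that decays
    geometrically with the training step. Early on the threshold is loose
    (most examples pass); later it tightens to filter noisy pseudo-labels."""
    rho_t = c * gamma ** (-(step - 1)) * rho0
    return np.asarray(losses) < rho_t
```

The contrast with a fixed confidence threshold (as in FixMatch) is that the admission criterion adapts as the model improves.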

Denoising pretraining for semantic segmentation

EA Brempong, S Kornblith, T Chen… - Proceedings of the …, 2022 - openaccess.thecvf.com
Semantic segmentation labels are expensive and time-consuming to acquire. To improve
label efficiency of semantic segmentation models, we revisit denoising autoencoders and …
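The denoising-autoencoder objective the snippet revisits is simple: corrupt the input with noise and train the network to recover the clean signal. A minimal numpy sketch of one such pretraining step, where `predict_fn` is a hypothetical stand-in for the backbone being pretrained:

```python
import numpy as np

def denoising_pretrain_loss(x, predict_fn, sigma=0.3, rng=None):
    """One step of a denoising pretraining objective: add Gaussian noise to
    the input and score how well predict_fn reconstructs the clean signal
    with mean-squared error, as in a denoising autoencoder."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=x.shape)
    x_noisy = x + noise
    recon = predict_fn(x_noisy)
    return float(((recon - x) ** 2).mean())
```

After pretraining with this label-free objective, the backbone would be fine-tuned on the (scarce) segmentation labels, which is the label-efficiency angle of the paper.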

Rethinking pre-training and self-training

B Zoph, G Ghiasi, TY Lin, Y Cui, H Liu… - Advances in neural …, 2020 - proceedings.neurips.cc
Pre-training is a dominant paradigm in computer vision. For example, supervised ImageNet
pre-training is commonly used to initialize the backbones of object detection and …

FixMatch: Simplifying semi-supervised learning with consistency and confidence

K Sohn, D Berthelot, N Carlini… - Advances in neural …, 2020 - proceedings.neurips.cc
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data
to improve a model's performance. This domain has seen fast progress recently, at the cost …
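FixMatch's central rule combines the two ingredients in its title: the argmax prediction on a weakly augmented view becomes a pseudo-label (consistency), but it is retained only if the model's maximum probability clears a fixed threshold (confidence). A minimal sketch of that pseudo-label selection, with names and the threshold value chosen for illustration:

```python
import numpy as np

def fixmatch_pseudo_labels(weak_probs, tau=0.95):
    """Sketch of FixMatch pseudo-label selection: take the argmax class on
    the weakly augmented view, and keep only predictions whose max
    probability reaches the confidence threshold tau. The retained labels
    would then supervise the strongly augmented view."""
    weak_probs = np.asarray(weak_probs)
    conf = weak_probs.max(axis=1)
    labels = weak_probs.argmax(axis=1)
    mask = conf >= tau
    return labels, mask
```

The returned mask zeroes out the unsupervised loss for low-confidence examples, which is what keeps noisy pseudo-labels from dominating early training.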