Deep learning-based electroencephalography analysis: a systematic review

Y Roy, H Banville, I Albuquerque… - Journal of neural …, 2019 - iopscience.iop.org
Context. Electroencephalography (EEG) produces a complex signal and can require several years
of training, as well as advanced signal processing and feature extraction methodologies to …

Deep learning for visual tracking: A comprehensive survey

SM Marvasti-Zadeh, L Cheng… - IEEE Transactions …, 2021 - ieeexplore.ieee.org
Visual target tracking is one of the most sought-after yet challenging research topics in
computer vision. Given the ill-posed nature of the problem and its popularity in a broad …

Deep model reassembly

X Yang, D Zhou, S Liu, J Ye… - Advances in neural …, 2022 - proceedings.neurips.cc
In this paper, we explore a novel knowledge-transfer task, termed Deep Model
Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous …

Fine-tuning can distort pretrained features and underperform out-of-distribution

A Kumar, A Raghunathan, R Jones, T Ma… - arXiv preprint arXiv …, 2022 - arxiv.org
When transferring a pretrained model to a downstream task, two popular methods are full
fine-tuning (updating all the model parameters) and linear probing (updating only the last …

Robust fine-tuning of zero-shot models

M Wortsman, G Ilharco, JW Kim, M Li… - Proceedings of the …, 2022 - openaccess.thecvf.com
Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of
data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific …

Surgical fine-tuning improves adaptation to distribution shifts

Y Lee, AS Chen, F Tajwar, A Kumar, H Yao… - arXiv preprint arXiv …, 2022 - arxiv.org
A common approach to transfer learning under distribution shift is to fine-tune the last few
layers of a pre-trained model, preserving learned features while also adapting to the new …

MetaFSCIL: A meta-learning approach for few-shot class incremental learning

Z Chi, L Gu, H Liu, Y Wang, Y Yu… - Proceedings of the …, 2022 - openaccess.thecvf.com
In this paper, we tackle the problem of few-shot class incremental learning (FSCIL). FSCIL
aims to incrementally learn new classes with only a few samples in each class. Most existing …

Finetune like you pretrain: Improved finetuning of zero-shot vision models

S Goyal, A Kumar, S Garg, Z Kolter… - Proceedings of the …, 2023 - openaccess.thecvf.com
Finetuning image-text models such as CLIP achieves state-of-the-art accuracies on a variety
of benchmarks. However, recent works (Kumar et al., 2022; Wortsman et al., 2021) have …

Big Transfer (BiT): General visual representation learning

A Kolesnikov, L Beyer, X Zhai, J Puigcerver… - Computer Vision–ECCV …, 2020 - Springer
Transfer of pre-trained representations improves sample efficiency and simplifies
hyperparameter tuning when training deep neural networks for vision. We revisit the …

Domain generalization by mutual-information regularization with pre-trained models

J Cha, K Lee, S Park, S Chun - European conference on computer vision, 2022 - Springer
Domain generalization (DG) aims to learn a model that generalizes to an unseen target
domain using only limited source domains. Previous attempts at DG fail to learn domain …