Deep models, e.g., CNNs and Vision Transformers, have achieved impressive results in many vision tasks in the closed world. However, novel classes emerge from time to time in …
Z Wang, E Yang, L Shen, H Huang - arXiv preprint arXiv:2307.09218, 2023 - arxiv.org
Forgetting refers to the loss or deterioration of previously acquired information or knowledge. While the existing surveys on forgetting have primarily focused on continual learning …
J Gao, J Zhang, X Liu, T Darrell… - Proceedings of the …, 2023 - openaccess.thecvf.com
Test-time adaptation harnesses test inputs to improve the accuracy of a model trained on source data when tested on shifted target data. Most methods update the source model by …
This paper presents a simple yet effective approach that improves continual test-time adaptation (TTA) in a memory-efficient manner. TTA may primarily be conducted on edge …
Test-time adaptation (TTA) methods, which generally rely on the model's predictions (e.g., entropy minimization) to adapt the source pretrained model to the unlabeled target domain …
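The entropy-minimization idea mentioned above can be illustrated with a minimal numpy sketch. This is not any paper's actual method: as a stand-in for the affine normalization parameters such methods typically adapt, it updates only a per-class bias on the logits, descending the analytic gradient of the mean prediction entropy (dH/dz_j = -p_j(log p_j + H)).

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # per-sample Shannon entropy of the predictive distribution
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def tta_entropy_step(logits, bias, lr=0.1):
    """One entropy-minimization step on a hypothetical per-class bias
    (a toy stand-in for the small parameter subset TTA methods adapt)."""
    p = softmax(logits + bias)
    H = entropy(p)                                   # shape (batch,)
    # gradient of mean entropy w.r.t. the shared bias:
    # dH/dz_j = -p_j * (log p_j + H), averaged over the batch
    grad = -(p * (np.log(p + 1e-12) + H[:, None])).mean(axis=0)
    return bias - lr * grad, H.mean()

# unlabeled "target" logits from a frozen source model (simulated here)
rng = np.random.default_rng(0)
logits = rng.normal(size=(32, 10))
bias = np.zeros(10)
h0 = entropy(softmax(logits)).mean()
for _ in range(50):
    bias, h = tta_entropy_step(logits, bias)
```

After the adaptation loop, the mean entropy `h` is lower than the pre-adaptation value `h0`, i.e., predictions sharpen without any target labels — which is exactly the self-supervision signal (and the failure mode, when predictions are confidently wrong) that motivates the methods surveyed here.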
Test-Time Adaptation (TTA) has recently emerged as a promising approach for tackling the robustness challenge under distribution shifts. However, the lack of consistent settings and …
Z Deng, Z Chen, S Niu, T Li… - Advances in Neural …, 2023 - proceedings.neurips.cc
Image super-resolution (SR) aims to learn a mapping from low-resolution (LR) to high-resolution (HR) images using paired HR-LR training images. Conventional SR methods typically …
Y Li, X Xu, Y Su, K Jia - Proceedings of the IEEE/CVF …, 2023 - openaccess.thecvf.com
Generalizing deep learning models to unknown target domain distribution with low latency has motivated research into test-time training/adaptation (TTT/TTA). Existing approaches …
Z Zhou, LZ Guo, LH Jia, D Zhang… - … Conference on Machine …, 2023 - proceedings.mlr.press
Test-time adaptation (TTA) adapts a source model to the distribution shift in testing data without using any source data. Many algorithms have concentrated on …