P Liu, L Wang, R Ranjan, G He, L Zhao - ACM Computing Surveys …, 2022 - dl.acm.org
Which samples should be labelled in a large dataset is one of the most important problems in training deep learning models. So far, a variety of active sample selection strategies related …
The ability to train complex and highly effective models often requires an abundance of training data, which can easily become a bottleneck in cost, time, and computational …
Unsupervised domain adaptation has recently emerged as an effective paradigm for generalizing deep neural networks to new target domains. However, there is still enormous …
While deep learning (DL) is data-hungry and usually relies on extensive labeled data to deliver good performance, Active Learning (AL) reduces labeling costs by selecting a small …
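The selection step this snippet describes is often instantiated as uncertainty sampling: rank the unlabeled pool by the model's predictive entropy and send the most uncertain samples to an annotator. A minimal sketch, assuming a toy pool of softmax outputs (the model, pool, and budget here are hypothetical illustrations, not from any of the surveyed papers):

```python
import math

def uncertainty_sampling(pool_probs, budget):
    """Rank unlabeled samples by predictive entropy and return the
    indices of the `budget` most uncertain ones."""
    def entropy(p):
        # Shannon entropy of one predicted class distribution.
        return -sum(q * math.log(q) for q in p if q > 0)
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]),
                    reverse=True)
    return ranked[:budget]

# Toy pool: softmax outputs of a (hypothetical) classifier on 4 unlabeled samples.
pool = [
    [0.98, 0.01, 0.01],  # confident -> low entropy
    [0.34, 0.33, 0.33],  # near-uniform -> high entropy
    [0.70, 0.20, 0.10],
    [0.50, 0.45, 0.05],
]
to_label = uncertainty_sampling(pool, budget=2)  # indices to send for annotation
```

In a full AL loop, the labels returned for `to_label` would be added to the training set and the model retrained before the next query round.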
Investigating active learning, we focus on the relation between the number of labeled examples (budget size), and suitable querying strategies. Our theoretical analysis shows a …
Current deep learning methods are regarded as favorable if they empirically perform well on dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual …
O Yehuda, A Dekel, G Hacohen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep active learning aims to reduce the annotation cost for the training of deep models, which is notoriously data-hungry. Until recently, deep active learning methods were …
M Xie, S Li, R Zhang, CH Liu - arXiv preprint arXiv:2302.13824, 2023 - arxiv.org
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new target domain by actively selecting limited target data to annotate, whereas traditional active …
Artificial Intelligence (AI) has achieved significant advances in technology and research through several decades of development, and is widely used in many areas including …