A survey of deep active learning

P Ren, Y Xiao, X Chang, PY Huang, Z Li… - ACM computing …, 2021 - dl.acm.org
Active learning (AL) attempts to maximize a model's performance gain while annotating the
fewest samples possible. Deep learning (DL) is greedy for data and requires a large amount …

A survey on active deep learning: from model driven to data driven

P Liu, L Wang, R Ranjan, G He, L Zhao - ACM Computing Surveys …, 2022 - dl.acm.org
Which samples should be labelled in a large dataset is one of the most important problems
for the training of deep learning. So far, a variety of active sample selection strategies related …

Batch active learning at scale

G Citovsky, G DeSalvo, C Gentile… - Advances in …, 2021 - proceedings.neurips.cc
The ability to train complex and highly effective models often requires an abundance of
training data, which can easily become a bottleneck in cost, time, and computational …

Active learning for domain adaptation: An energy-based approach

B Xie, L Yuan, S Li, CH Liu, X Cheng… - Proceedings of the AAAI …, 2022 - ojs.aaai.org
Unsupervised domain adaptation has recently emerged as an effective paradigm for
generalizing deep neural networks to new target domains. However, there is still enormous …

A comparative survey of deep active learning

X Zhan, Q Wang, K Huang, H Xiong, D Dou… - arXiv preprint arXiv …, 2022 - arxiv.org
While deep learning (DL) is data-hungry and usually relies on extensive labeled data to
deliver good performance, Active Learning (AL) reduces labeling costs by selecting a small …

Active learning on a budget: Opposite strategies suit high and low budgets

G Hacohen, A Dekel, D Weinshall - arXiv preprint arXiv:2202.02794, 2022 - arxiv.org
Investigating active learning, we focus on the relation between the number of labeled
examples (budget size), and suitable querying strategies. Our theoretical analysis shows a …

A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning

M Mundt, Y Hong, I Pliushch, V Ramesh - Neural Networks, 2023 - Elsevier
Current deep learning methods are regarded as favorable if they empirically perform well on
dedicated test sets. This mentality is seamlessly reflected in the resurfacing area of continual …

Active learning through a covering lens

O Yehuda, A Dekel, G Hacohen… - Advances in Neural …, 2022 - proceedings.neurips.cc
Deep active learning aims to reduce the annotation cost for the training of deep models,
which is notoriously data-hungry. Until recently, deep active learning methods were …

Dirichlet-based uncertainty calibration for active domain adaptation

M Xie, S Li, R Zhang, CH Liu - arXiv preprint arXiv:2302.13824, 2023 - arxiv.org
Active domain adaptation (DA) aims to maximally boost the model adaptation on a new
target domain by actively selecting limited target data to annotate, whereas traditional active …

On the opportunities of green computing: A survey

Y Zhou, X Lin, X Zhang, M Wang, G Jiang, H Lu… - arXiv preprint arXiv …, 2023 - arxiv.org
Artificial Intelligence (AI) has achieved significant advancements in technology and research
with the development over several decades, and is widely used in many areas including …