Authors
Jingxi Xu, Da Tang, Tony Jebara
Publication date
2021/3/24
Journal
arXiv preprint arXiv:2103.13420
Abstract
The cost of annotating training data has traditionally been a bottleneck for supervised learning approaches. The problem is further exacerbated when supervised learning is applied to a number of correlated tasks simultaneously, since the number of labels required scales with the number of tasks. To mitigate this concern, we propose an active multitask learning algorithm that achieves knowledge transfer between tasks. The approach forms a so-called committee for each task that jointly makes decisions and directly shares data across similar tasks. Our approach reduces the number of queries needed during training while maintaining high accuracy on test data. Empirical results on benchmark datasets show significant improvements in both accuracy and number of query requests.
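The abstract builds on query-by-committee active learning, where a committee of models votes on each unlabeled point and the learner spends its labeling budget on the points with the most disagreement. Below is a minimal illustrative sketch of that general mechanism only; the 1-D data, the threshold classifiers, and the vote-entropy disagreement measure are assumptions for illustration, not the paper's actual multitask algorithm.

```python
import random
import math

# Hedged sketch of query-by-committee active learning.
# All specifics (1-D pool, threshold classifiers, committee cuts)
# are illustrative assumptions, not the paper's setup.

random.seed(0)

# Unlabeled pool of 1-D points; the hidden oracle labels by sign.
pool = [random.uniform(-1, 1) for _ in range(100)]
oracle = lambda x: int(x > 0)

# Committee: simple threshold classifiers with slightly different cuts.
committee = [-0.2, -0.1, 0.0, 0.1, 0.2]
predict = lambda t, x: int(x > t)

def vote_entropy(x):
    """Disagreement measure: entropy of the committee's vote split."""
    votes = [predict(t, x) for t in committee]
    p = sum(votes) / len(votes)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log(p) + (1 - p) * math.log(1 - p))

# Query the most contentious points first, so oracle labels are not
# wasted on points the committee already agrees on.
queries = sorted(pool, key=vote_entropy, reverse=True)[:5]
labels = [(x, oracle(x)) for x in queries]
```

In this toy setup the committee only disagrees near its spread of thresholds, so every queried point lands in that uncertain band; the paper's contribution, per the abstract, is forming such committees per task and additionally sharing data across similar tasks.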
Total citations
Scholar articles
J Xu, D Tang, T Jebara - arXiv preprint arXiv:2103.13420, 2021