Fast region proposal learning for object detection for robotics

F Ceola, E Maiettini, G Pasquale, L Rosasco… - arXiv preprint arXiv:2011.12790, 2020 - arxiv.org
Object detection is a fundamental task for robots operating in unstructured environments. Today, several deep learning algorithms solve this task with remarkable performance. Unfortunately, training such systems requires several hours of GPU time. For robots to successfully adapt to changes in the environment or learn new objects, it is also important that object detectors can be re-trained in a short amount of time. A recent method [1] proposes an architecture that leverages the powerful representations of deep learning descriptors while allowing fast adaptation. Exploiting the natural decomposition of the task into (i) candidate region generation, (ii) feature extraction, and (iii) region classification, this method adapts the detector quickly by re-training only the classification layer. This shortens training time while maintaining state-of-the-art performance. In this paper, we first demonstrate that a further boost in accuracy can be obtained by additionally adapting the candidate region generation to the task at hand. Second, we extend the object detection system presented in [1] with the proposed fast learning approach, providing experimental evidence of the improvement in speed and accuracy on two different robotics datasets. The code to reproduce the experiments is publicly available on GitHub.
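
To make the decomposition above concrete, here is a minimal sketch of the frozen-representation idea: the deep feature extractor stays fixed and only a shallow per-region classifier is re-fit on the new task, so adaptation requires no backpropagation through the network. It assumes a torchvision ResNet backbone and uses a scikit-learn SGDClassifier as a generic stand-in for the shallow classification stage; the backbone choice, crop size, and classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import torch
import torchvision
from sklearn.linear_model import SGDClassifier

# Frozen pre-trained backbone used as a fixed feature extractor
# (step (ii) of the decomposition).
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

def region_features(crops):
    """Encode cropped region candidates (N, 3, 224, 224) into feature vectors."""
    with torch.no_grad():
        return backbone(crops).numpy()

# Dummy region crops and labels standing in for a new robotics task;
# in a real pipeline the crops would come from step (i), the
# candidate region generation.
crops = torch.randn(32, 3, 224, 224)
labels = np.random.randint(0, 3, size=32)  # 3 novel object classes

# Step (iii): re-train only the region classifier. This is fast because
# it is a shallow model fit on pre-computed, frozen features.
clf = SGDClassifier().fit(region_features(crops), labels)
print(clf.predict(region_features(crops[:4])))
```

The same pattern extends to the paper's contribution: rather than keeping step (i) fixed as well, the candidate region generation would also be adapted to the new task, which is what yields the reported accuracy boost.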