Recurrence without recurrence: Stable video landmark detection with deep equilibrium models

P Micaelli, A Vahdat, H Yin, J Kautz… - Proceedings of the …, 2023 - openaccess.thecvf.com
Abstract
Cascaded computation, whereby predictions are recurrently refined over several stages, has been a persistent theme throughout the development of landmark detection models. In this work, we show that the recently proposed Deep Equilibrium Model (DEQ) can be naturally adapted to this form of computation. Our Landmark DEQ (LDEQ) achieves state-of-the-art performance on the challenging WFLW facial landmark dataset, reaching 3.92 NME with fewer parameters and a training memory cost of O(1) in the number of recurrent modules. Furthermore, we show that DEQs are particularly suited for landmark detection in videos. In this setting, it is typical to train on still images due to the lack of labelled videos. This can lead to a "flickering" effect at inference time on video, whereby a model can rapidly oscillate between different plausible solutions across consecutive frames. By rephrasing DEQs as a constrained optimization, we emulate recurrence at inference time, despite not having access to temporal data at training time. This Recurrence without Recurrence (RwR) paradigm helps in reducing landmark flicker, which we demonstrate by introducing a new metric, normalized mean flicker (NMF), and contributing a new facial landmark video dataset (WFLW-V) targeting landmark uncertainty. On the WFLW-V hard subset, made up of 500 videos, our LDEQ with RwR improves the NME and NMF by 10% and 13%, respectively, compared to the strongest previously published model using a hand-tuned conventional filter.
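The core DEQ idea the abstract refers to is that the output is defined implicitly as a fixed point z* = f(z*, x) of a single layer, solved iteratively rather than by stacking explicit recurrent stages. The following is a minimal, generic sketch of such a forward pass, not the paper's LDEQ architecture; the toy layer `f`, the weight matrix `W`, and the convergence tolerances are illustrative assumptions.

```python
import numpy as np

def deq_forward(f, x, z0, tol=1e-6, max_iter=100):
    """Solve z* = f(z*, x) by naive fixed-point iteration.

    Memory is O(1) in the number of iterations: only the current
    iterate is kept, unlike an unrolled cascade of explicit stages.
    (Real DEQs typically use faster root solvers, e.g. Anderson
    acceleration or Broyden's method.)
    """
    z = z0
    for _ in range(max_iter):
        z_next = f(z, x)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Toy contractive layer: f(z, x) = tanh(W z + x).
# Small weights keep the spectral norm of W well below 1, and tanh is
# 1-Lipschitz, so the iteration is a contraction and converges.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4, 4))  # illustrative weights
x = rng.standard_normal(4)

z_star = deq_forward(lambda z, x: np.tanh(W @ z + x), x, np.zeros(4))

# At equilibrium, z* should satisfy z* ≈ tanh(W z* + x).
residual = np.linalg.norm(z_star - np.tanh(W @ z_star + x))
```

The RwR idea in the abstract then reuses this machinery at inference time on video: the fixed-point problem on frame t is warm-started or constrained using the equilibrium from frame t-1, which is what emulates recurrence without any temporal training data.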