Authors
Matthew Shreve, Sridhar Godavarthy, Dmitry Goldgof, Sudeep Sarkar
Publication date
2011/3/21
Conference paper
2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG)
Pages
51-56
Publisher
IEEE
Description
We propose a method for the automatic spotting (temporal segmentation) of facial expressions in long videos comprising both macro- and micro-expressions. The method utilizes the strain induced on the facial skin by the non-rigid motion that occurs during expressions. The strain magnitude is calculated using the central difference method over a robust, dense optical flow field observed in several regions (chin, mouth, cheek, forehead) of each subject's face. This new approach successfully detects and distinguishes between large expressions (macro) and rapid, localized expressions (micro). Extensive testing was completed on a dataset containing 181 macro-expressions and 124 micro-expressions. The dataset consists of 56 videos collected at USF, 6 videos from the Canal-9 political debates, and 3 low-quality videos found on the internet. A spotting accuracy of 85% was achieved for macro …
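The abstract's core computation can be sketched as follows: given a dense optical-flow field, take spatial derivatives with the central difference method and combine them into a per-pixel strain magnitude. This is a minimal illustration, not the authors' implementation; the strain-tensor form assumed here is the standard infinitesimal strain definition, and the function name `strain_magnitude` is hypothetical.

```python
import numpy as np

def strain_magnitude(u, v):
    """Per-pixel strain magnitude from a dense optical-flow field.

    u, v: 2-D arrays of horizontal/vertical displacement per pixel.
    np.gradient computes central differences at interior points,
    matching the derivative scheme named in the abstract.
    """
    du_dy, du_dx = np.gradient(u)   # rows = y-axis, cols = x-axis
    dv_dy, dv_dx = np.gradient(v)
    exx = du_dx                     # normal strain along x
    eyy = dv_dy                     # normal strain along y
    exy = 0.5 * (du_dy + dv_dx)     # shear strain (symmetric part)
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

For example, a uniform horizontal stretch u = 0.1·x, v = 0 yields a constant strain magnitude of 0.1 everywhere; expression regions (mouth, forehead, etc.) would instead show localized peaks that the spotting stage could threshold.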
Total citations
Per-year citation histogram, 2011–2024 (chart values garbled in extraction)
Scholar articles
M Shreve, S Godavarthy, D Goldgof, S Sarkar - 2011 IEEE international conference on automatic face …, 2011