Z Liu, J Li, L Ye, G Sun, L Shen - IEEE Transactions on Circuits …, 2016 - ieeexplore.ieee.org
This paper proposes an effective spatiotemporal saliency model for unconstrained videos with complicated motion and complex scenes. First, superpixel-level motion and color …
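The first stage mentioned here, superpixel-level motion and color cues, can be sketched roughly as follows. This is only an illustrative guess at such cues, assuming SLIC superpixels (scikit-image) and Farneback optical flow (OpenCV); it is not the paper's actual feature pipeline or fusion scheme.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def superpixel_motion_color_cues(frame_prev, frame_cur, n_segments=300):
    """Crude per-superpixel motion and color cues for one pair of BGR frames."""
    # Dense Farneback optical flow between the two frames.
    g0 = cv2.cvtColor(frame_prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_cur, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)

    # SLIC superpixels on the current frame (slic expects RGB input).
    rgb = cv2.cvtColor(frame_cur, cv2.COLOR_BGR2RGB)
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    n_sp = labels.max() + 1

    # Mean flow magnitude and mean color per superpixel.
    motion_cue = np.zeros(n_sp)
    mean_color = np.zeros((n_sp, 3))
    for sp in range(n_sp):
        mask = labels == sp
        motion_cue[sp] = flow_mag[mask].mean()
        mean_color[sp] = rgb[mask].mean(axis=0)

    # Color cue: average color contrast of each superpixel against all others.
    color_cue = np.abs(mean_color[:, None, :] - mean_color[None, :, :]).sum(axis=2).mean(axis=1)
    return labels, motion_cue, color_cue
```

A spatiotemporal saliency map would then come from normalizing and fusing the two cues per superpixel and projecting them back to the pixel grid; the fusion used in the paper is not reproduced here.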
Video summarization aims to generate a compact summary of the original video for efficient video browsing. To provide video summaries which are consistent with the human …
In this paper, we introduce a novel approach to identify salient object regions in videos via object proposals. The core idea is to solve the saliency detection problem by ranking and …
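The idea of ranking object proposals by saliency can be pictured with a toy scoring rule. The box format, the synthetic saliency map, and the inside-versus-outside contrast score below are illustrative assumptions, not the paper's ranking criterion.

```python
import numpy as np

def rank_proposals(saliency_map, proposals):
    """Rank candidate object proposals by a simple saliency contrast score.

    saliency_map: 2-D array of per-pixel saliency in [0, 1].
    proposals: list of (x0, y0, x1, y1) boxes in pixel coordinates.
    Returns the proposals sorted from most to least salient.
    """
    scores = []
    for (x0, y0, x1, y1) in proposals:
        inside = saliency_map[y0:y1, x0:x1]
        outside_sum = saliency_map.sum() - inside.sum()
        outside_area = saliency_map.size - inside.size
        # Score: mean saliency inside the box minus mean saliency outside it.
        scores.append(inside.mean() - outside_sum / max(outside_area, 1))
    order = np.argsort(scores)[::-1]
    return [proposals[i] for i in order]

# Example: two boxes on a synthetic map with one bright 40x40 blob.
sal = np.zeros((120, 160)); sal[40:80, 60:100] = 1.0
print(rank_proposals(sal, [(60, 40, 100, 80), (0, 0, 40, 40)]))
```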
Y Li, S Li, C Chen, A Hao, H Qin - IEEE Transactions on …, 2019 - ieeexplore.ieee.org
Conventional video saliency detection methods frequently follow the common bottom-up thread to estimate video saliency in a short-term fashion. As a result, such methods …
Video saliency detection aims to pop out the most salient regions in every frame of a video. Up to now, many efforts have been devoted to video saliency detection from various aspects …
X Zhou, W Cao, H Gao, Z Ming, J Zhang - Information Sciences, 2023 - Elsevier
Image saliency detection, to which much effort has been devoted in recent years, has advanced significantly. In contrast, the community has paid little attention to video saliency …
KJ Hsu, CC Tsai, YY Lin, X Qian… - Proceedings of the …, 2018 - openaccess.thecvf.com
In this paper, we address co-saliency detection in a set of images jointly covering objects of a specific class by an unsupervised convolutional neural network (CNN). Our method does …
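To make the unsupervised-CNN idea concrete, here is a rough sketch (not the authors' architecture or loss) of a tiny mask predictor trained so that features pooled under the predicted masks agree across the image set; `feats` is assumed to come from some frozen backbone.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskNet(nn.Module):
    """Tiny fully convolutional net predicting one co-saliency map per image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x):                   # x: (N, 3, H, W) image set
        return torch.sigmoid(self.body(x))  # (N, 1, H, W) soft masks

def unsupervised_cosaliency_loss(feats, masks, area_weight=0.1):
    """feats: (N, C, h, w) frozen backbone features; masks: (N, 1, H, W)."""
    masks = F.interpolate(masks, size=feats.shape[-2:], mode="bilinear",
                          align_corners=False)
    # Masked average feature of each image.
    pooled = (feats * masks).sum(dim=(2, 3)) / (masks.sum(dim=(2, 3)) + 1e-6)
    pooled = F.normalize(pooled, dim=1)
    center = F.normalize(pooled.mean(dim=0, keepdim=True), dim=1)
    # Pull the masked features of all images toward their common center ...
    consistency = 1.0 - (pooled * center).sum(dim=1).mean()
    # ... while discouraging the trivial solution of masking everything.
    return consistency + area_weight * masks.mean()
```

No pixel-level labels are used: the supervisory signal comes only from the requirement that the masked regions look alike across the image set.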
Y Xiao, H Wang, W Xu - Pattern Recognition Letters, 2017 - Elsevier
One-class SVM (OCSVM) is widely adopted in the field of one-class classification (OCC). However, outliers in the training set negatively influence the classification surface of …
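The sensitivity described here is easy to reproduce with scikit-learn's OneClassSVM: fitting on a training set contaminated with a handful of outliers changes which clean points the learned surface accepts. The data, contamination level, and kernel settings below are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(200, 2))    # target-class training samples
outliers = rng.uniform(-6.0, 6.0, size=(10, 2))  # a few contaminating points
test = rng.normal(0.0, 1.0, size=(500, 2))       # clean held-out target samples

clean_model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(inliers)
dirty_model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(
    np.vstack([inliers, outliers]))

# Fraction of clean test points accepted (+1) by each boundary; the
# contaminated training set typically shifts or loosens the surface.
print("clean training :", np.mean(clean_model.predict(test) == 1))
print("with outliers  :", np.mean(dirty_model.predict(test) == 1))
```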
L Maczyta, P Bouthemy, O Le Meur - Pattern Recognition Letters, 2019 - Elsevier
The problem addressed in this paper appertains to the domain of motion saliency in videos. However, this is a new problem, since we aim to extract the temporal segments of the video …