We address the challenge of sentiment analysis from visual content. In contrast to existing methods which infer sentiment or emotion directly from visual low-level features, we propose …
In this paper we describe our TRECVID 2005 experiments. The UvA-MediaMill team participated in four tasks. For the detection of camera work (runid: A CAM) we investigate the …
The use of image reranking to boost retrieval performance has been found to be successful for simple queries. It is, however, less effective for complex queries due to the widened …
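One common form of the image reranking this snippet refers to is pseudo-relevance feedback: treat the top-ranked results as likely relevant and reorder the full list by visual similarity to their average feature. The sketch below is an illustrative assumption, not the paper's actual method; the feature vectors and item ids are made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def rerank(ranked_ids, features, k=2):
    """Rerank results by similarity to the centroid of the top-k
    initially ranked items (a pseudo-relevance feedback sketch)."""
    top = [features[i] for i in ranked_ids[:k]]
    centroid = [sum(col) / k for col in zip(*top)]
    return sorted(ranked_ids, key=lambda i: cosine(features[i], centroid), reverse=True)

# Toy example: "b" is visually close to the top results, so it moves up.
feats = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
print(rerank(["a", "c", "b"], feats))  # ['b', 'a', 'c']
```

For a complex query, the initial top results are less reliable, so this centroid is noisier, which is one intuition for why reranking helps less there.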
We consider automated detection of events in video without the use of any visual training examples. A common approach is to represent videos as classification scores obtained from …
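The zero-example setting mentioned here typically scores a video by matching its vector of concept classifier scores against the terms of a textual event query. A minimal sketch, assuming a simple term-overlap matcher (the concept names, scores, and query are illustrative, not from the paper):

```python
def event_score(video_concept_scores, query_terms):
    """Score a video for a textual event query by summing the detector
    scores of concepts whose names appear among the query terms.
    Real systems would use semantic word similarity instead of exact match."""
    return sum(score for concept, score in video_concept_scores.items()
               if concept in query_terms)

# Per-concept classifier scores for one video (illustrative values).
video = {"dog": 0.9, "grass": 0.7, "car": 0.1}
# Terms extracted from the query "dog playing in a park".
query = {"dog", "playing", "park", "grass"}
print(event_score(video, query))  # ≈ 1.6
```

Videos are then ranked by this score, so no visual training examples of the event itself are ever needed.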
This paper introduces a novel approach to facilitating image search based on a compact semantic embedding. A novel method is developed to explicitly map concepts and image …
In this paper we summarize our TRECVID 2017 [1] video recognition and retrieval experiments. We participated in three tasks: video search, event detection and video …
Since its invention, the Web has evolved into the largest multimedia repository that has ever existed. This evolution is a direct result of the explosion of user-generated content …
M. Soltanian and S. Ghaemmaghami, IEEE Transactions on …, 2018.
This paper focuses on video event recognition based on frame-level convolutional neural network (CNN) descriptors. Using transfer learning, the image-trained descriptors are …
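A common way to turn frame-level CNN descriptors into a video-level representation, as this line of work does, is to pool them (e.g. by averaging) before classification. The sketch below shows only the pooling step with made-up two-dimensional features standing in for CNN descriptors; it is an assumption about the general recipe, not the paper's exact pipeline.

```python
def pool_frames(frame_descriptors):
    """Average-pool per-frame descriptors into a single video-level
    vector; a classifier (e.g. a linear SVM) would be trained on it."""
    n = len(frame_descriptors)
    return [sum(col) / n for col in zip(*frame_descriptors)]

# Two frames, each with a tiny stand-in "CNN" feature vector.
frames = [[0.2, 0.8], [0.4, 0.6]]
print(pool_frames(frames))  # ≈ [0.3, 0.7]
```

Average pooling discards temporal order; the transfer-learning part of the snippet refers to the frame descriptors themselves coming from a network pretrained on still images.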
In this paper we aim to recognize scenes in images without using any scene images as training data. Unlike attribute-based approaches, we do not carefully select the …