EyeQoE: A Novel QoE Assessment Model for 360-Degree Videos Using Ocular Behaviors

H Zhu, T Li, C Wang, W Jin, S Murali, M Xiao, D Ye, M Li
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2022 (dl.acm.org)
As virtual reality (VR) offers an experience unmatched by existing multimedia technologies, VR videos, also known as 360-degree videos, have attracted considerable attention from academia and industry. Quantifying and modeling end users' perceived quality when watching 360-degree videos, known as quality of experience (QoE), lies at the center of high-quality provisioning of these multimedia services. In this work, we present EyeQoE, a novel QoE assessment model for 360-degree videos using ocular behaviors. Unlike prior approaches, which mostly rely on objective factors, EyeQoE leverages the new ocular sensing modality to comprehensively capture both subjective and objective impact factors for QoE modeling. We propose a novel method that models eye-based cues as graphs and develop a graph convolutional network (GCN)-based classifier that produces QoE assessments by extracting intrinsic features from the graph-structured data. We further exploit a Siamese network to eliminate the impact of subject and visual-stimulus heterogeneity. A domain adaptation scheme named MADA is also devised to generalize our model to a vast range of unseen 360-degree videos. Extensive tests are carried out on our collected dataset. Results show that EyeQoE achieves the best prediction accuracy of 92.9%, outperforming state-of-the-art approaches. As another contribution of this work, we have made our dataset publicly available at https://github.com/MobiSec-CSE-UTA/EyeQoE_Dataset.git.
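The abstract does not give implementation details of the GCN-based classifier, but the general idea it names (graph convolution over gaze-derived graphs, pooled into a fixed-size embedding for classification) can be sketched generically. The sketch below is an illustrative NumPy implementation of the standard symmetric-normalized graph convolution and mean pooling; all function names, feature choices (e.g. fixation position and duration as node features), and weight shapes are assumptions, not EyeQoE's actual architecture.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the propagation matrix used by
    standard graph convolutional networks."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_norm, X, W):
    """One graph-convolution layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(A_norm @ X @ W, 0.0)

def graph_embedding(A, X, W1, W2):
    """Two GCN layers followed by mean pooling, yielding a
    fixed-size embedding of the whole gaze graph that a
    downstream classifier could map to a QoE label."""
    A_norm = normalize_adjacency(A)
    H = gcn_layer(A_norm, X, W1)
    H = gcn_layer(A_norm, H, W2)
    return H.mean(axis=0)

# Toy example: a path graph of 3 fixations, each with a
# 4-dimensional (hypothetical) feature vector.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = rng.normal(size=(3, 4))
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 6))
emb = graph_embedding(A, X, W1, W2)   # shape (6,)
```

In a trained model the weights would be learned and the pooled embedding fed to a classifier head; here they are random, since the point is only the message-passing and pooling structure.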