TSception: a deep learning framework for emotion detection using EEG

Y Ding, N Robinson, Q Zeng, D Chen… - 2020 International Joint Conference on Neural Networks (IJCNN), 2020 - ieeexplore.ieee.org
In this paper, we propose a deep learning framework, TSception, for emotion detection from electroencephalogram (EEG) signals. TSception consists of temporal and spatial convolutional layers, which learn discriminative representations in the time and channel domains simultaneously. The temporal learner consists of multi-scale 1D convolutional kernels whose lengths are related to the sampling rate of the EEG signal, allowing it to learn multiple temporal and frequency representations. The spatial learner takes advantage of the asymmetry of emotional responses in the frontal brain area to learn discriminative representations from the left and right hemispheres of the brain. In our study, a system is designed to study emotional arousal in an immersive virtual reality (VR) environment. EEG data were collected from 18 healthy subjects using this system to evaluate the performance of the proposed deep learning network for the classification of low and high emotional arousal states. The proposed method is compared with SVM, EEGNet, and LSTM. TSception achieves a high classification accuracy of 86.03%, which outperforms the prior methods significantly (p < 0.05).
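
The two architectural ideas named in the abstract can be illustrated with a short sketch. The following minimal PyTorch example is not the authors' implementation; it only sketches temporal 1D kernels whose lengths are fractions of the sampling rate and a spatial kernel that spans one hemisphere's channels at a time. The filter counts, kernel-length ratios, and pooling sizes are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of multi-scale temporal kernels tied to
# the sampling rate and a hemisphere-spanning spatial kernel. Filter counts,
# length ratios, and pooling sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleTemporal(nn.Module):
    """1D temporal convolutions at several scales, each a fraction of the sampling rate."""
    def __init__(self, sampling_rate, out_channels=8, ratios=(0.5, 0.25, 0.125)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(1, out_channels, kernel_size=(1, int(sampling_rate * r)))
            for r in ratios
        ])

    def forward(self, x):                      # x: (batch, 1, eeg_channels, time)
        outs = []
        for conv in self.branches:
            h = torch.relu(conv(x))
            # pool each branch to a common temporal length so branches can be concatenated
            h = F.adaptive_avg_pool2d(h, (x.size(2), 64))
            outs.append(h)
        return torch.cat(outs, dim=1)          # concatenate along the feature dimension

class HemisphericSpatial(nn.Module):
    """Spatial kernel covering half of the EEG channels (one hemisphere) per step."""
    def __init__(self, in_channels, n_eeg_channels, out_channels=16):
        super().__init__()
        half = n_eeg_channels // 2
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=(half, 1), stride=(half, 1))

    def forward(self, x):                      # x: (batch, features, eeg_channels, time)
        return torch.relu(self.conv(x))

# Example: a 4-second, 28-channel EEG segment sampled at 256 Hz
x = torch.randn(2, 1, 28, 4 * 256)
temporal = MultiScaleTemporal(sampling_rate=256)
spatial = HemisphericSpatial(in_channels=3 * 8, n_eeg_channels=28)  # 3 branches x 8 filters
print(spatial(temporal(x)).shape)              # torch.Size([2, 16, 2, 64])
```

Sizing the temporal kernels relative to the sampling rate means each branch covers a different window of signal history, so the stack captures both slow and fast dynamics without hand-picked frequency bands; the hemisphere-wide spatial kernel reflects the left/right asymmetry the abstract describes.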