A systematic review on affective computing: Emotion models, databases, and recent advances

Y Wang, W Song, W Tao, A Liotta, D Yang, X Li, S Gao… - Information …, 2022 - Elsevier
Affective computing conjoins the research topics of emotion recognition and sentiment
analysis, and can be realized with unimodal or multimodal data, consisting primarily of …

A systematic survey on multimodal emotion recognition using learning algorithms

N Ahmed, Z Al Aghbari, S Girija - Intelligent Systems with Applications, 2023 - Elsevier
Emotion recognition is the process of detecting, evaluating, interpreting, and responding to people's
emotional states and emotions, ranging from happiness to fear to humiliation. The COVID-19 …

Bi-bimodal modality fusion for correlation-controlled multimodal sentiment analysis

W Han, H Chen, A Gelbukh, A Zadeh… - Proceedings of the …, 2021 - dl.acm.org
Multimodal sentiment analysis aims to extract and integrate semantic information collected
from multiple modalities to recognize the expressed emotions and sentiment in multimodal …
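
The snippet above does not reproduce the paper's fusion scheme, so the following PyTorch sketch only illustrates the general idea of pairwise cross-modal fusion for sentiment prediction: each modality attends to the other with standard multi-head cross-attention, and the two enriched views are pooled and concatenated. All module names, dimensions, and the regression head are assumptions, not the authors' Bi-Bimodal Fusion Network.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative pairwise fusion: each modality attends to the other with
    multi-head cross-attention; a generic sketch, not the paper's model."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn_a2b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b2a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 1)  # e.g. a sentiment intensity score

    def forward(self, seq_a, seq_b):
        # seq_a: (batch, len_a, dim) text features; seq_b: (batch, len_b, dim) audio features
        a_enriched, _ = self.attn_a2b(seq_a, seq_b, seq_b)  # text queries audio
        b_enriched, _ = self.attn_b2a(seq_b, seq_a, seq_a)  # audio queries text
        pooled = torch.cat([a_enriched.mean(dim=1), b_enriched.mean(dim=1)], dim=-1)
        return self.head(pooled)

if __name__ == "__main__":
    text, audio = torch.randn(2, 20, 128), torch.randn(2, 50, 128)  # dummy features
    print(CrossModalFusion()(text, audio).shape)  # torch.Size([2, 1])
```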

Speech emotion recognition using self-supervised features

E Morais, R Hoory, W Zhu, I Gat… - ICASSP 2022-2022 …, 2022 - ieeexplore.ieee.org
Self-supervised pre-trained features have consistently delivered state-of-the-art results in the
field of natural language processing (NLP); however, their merits in the field of speech …
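
As a minimal sketch of how self-supervised speech features are typically consumed for emotion recognition (the checkpoint name, mean pooling, and four-class head are assumptions, not the paper's experimental setup), a public wav2vec 2.0 model can serve as a frozen feature extractor for a lightweight classifier:

```python
import torch
import torch.nn as nn
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# A public wav2vec 2.0 checkpoint stands in for whichever self-supervised
# model is evaluated; this is an illustrative pipeline, not the paper's.
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
classifier = nn.Linear(encoder.config.hidden_size, 4)  # 4 emotion classes (assumed)

waveform = torch.randn(16000)  # 1 s of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, frames, hidden)
logits = classifier(hidden.mean(dim=1))           # mean-pool over frames, then classify
print(logits.shape)  # torch.Size([1, 4])
```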

Emotions don't lie: An audio-visual deepfake detection method using affective cues

T Mittal, U Bhattacharya, R Chandra, A Bera… - Proceedings of the 28th …, 2020 - dl.acm.org
We present a learning-based method for distinguishing real from deepfake multimedia
content. To maximize information for learning, we extract and analyze the similarity between …
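
The core intuition hinted at in the abstract, that the emotion conveyed by the audio and visual streams of a genuine video should agree, can be sketched as a similarity score between per-modality affective embeddings. The encoders below are stand-in MLPs and the feature dimensions are assumptions; this is not the perceived-emotion pipeline used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffectiveCueSimilarity(nn.Module):
    """Generic sketch: embed audio and facial features into a shared affective
    space and score their agreement; low similarity hints at manipulation."""

    def __init__(self, audio_dim=40, face_dim=136, emb_dim=64):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))
        self.face_enc = nn.Sequential(nn.Linear(face_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, audio_feat, face_feat):
        a = F.normalize(self.audio_enc(audio_feat), dim=-1)
        v = F.normalize(self.face_enc(face_feat), dim=-1)
        return (a * v).sum(dim=-1)  # cosine similarity in [-1, 1]

if __name__ == "__main__":
    model = AffectiveCueSimilarity()
    print(model(torch.randn(3, 40), torch.randn(3, 136)).shape)  # torch.Size([3])
```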

Deep learning-based multimodal emotion recognition from audio, visual, and text modalities: A systematic review of recent advancements and future prospects

S Zhang, Y Yang, C Chen, X Zhang, Q Leng… - Expert Systems with …, 2023 - Elsevier
Emotion recognition has recently attracted extensive interest due to its significant
applications in human-computer interaction. The expression of human emotion depends on …

Former-DFER: Dynamic facial expression recognition transformer

Z Zhao, Q Liu - Proceedings of the 29th ACM International Conference …, 2021 - dl.acm.org
This paper proposes a dynamic facial expression recognition transformer (Former-DFER) for
the in-the-wild scenario. Specifically, the proposed Former-DFER mainly consists of a …
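
Since the snippet truncates before describing the architecture, the code below is only a minimal sketch of a dynamic facial expression recognizer in the same spirit: per-frame CNN features followed by a temporal transformer encoder. Layer sizes, the seven-class head, and the pooling are assumptions, not the Former-DFER design.

```python
import torch
import torch.nn as nn

class TinyDFERTransformer(nn.Module):
    """Minimal sketch: per-frame CNN features + temporal transformer encoder.
    A generic stand-in, not the Former-DFER architecture."""

    def __init__(self, num_classes=7, dim=128):
        super().__init__()
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, clip):
        # clip: (batch, frames, 3, H, W) facial crops
        b, t = clip.shape[:2]
        feats = self.frame_cnn(clip.flatten(0, 1)).view(b, t, -1)  # spatial features per frame
        feats = self.temporal(feats)                               # temporal modelling
        return self.head(feats.mean(dim=1))                        # clip-level logits

if __name__ == "__main__":
    print(TinyDFERTransformer()(torch.randn(2, 8, 3, 64, 64)).shape)  # torch.Size([2, 7])
```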

MART: Masked affective representation learning via masked temporal distribution distillation

Z Zhang, P Zhao, E Park… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Limited training data is a long-standing problem for video emotion analysis (VEA). Existing
works leverage the power of large-scale image datasets for transferring while failing to …

Target and source modality co-reinforcement for emotion understanding from asynchronous multimodal sequences

D Yang, Y Liu, C Huang, M Li, X Zhao, Y Wang… - Knowledge-Based …, 2023 - Elsevier
Perceiving human emotions from a multimodal perspective has received significant attention
in knowledge engineering communities. Due to the variable receiving frequency for …

Exploiting BERT for multimodal target sentiment classification through input space translation

Z Khan, Y Fu - Proceedings of the 29th ACM international conference …, 2021 - dl.acm.org
Multimodal target/aspect sentiment classification combines multimodal sentiment analysis
and aspect/target sentiment classification. The goal of the task is to combine vision and …
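
A hedged sketch of the general "translate vision into the language model's input space" idea: project pooled image features into BERT's word-embedding dimension and prepend them as pseudo-tokens before classifying the target sentiment. The projection, pooling, the $T$ target placeholder, and the three-class head are illustrative assumptions, not the paper's translation module.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
img_proj = nn.Linear(2048, bert.config.hidden_size)  # pooled CNN image features (assumed dim)
classifier = nn.Linear(bert.config.hidden_size, 3)   # negative / neutral / positive

text = "The $T$ here is amazing."  # $T$ stands in for the target term (assumed convention)
tokens = tokenizer(text, return_tensors="pt")
word_emb = bert.embeddings.word_embeddings(tokens["input_ids"])  # (1, seq, hidden)
img_tokens = img_proj(torch.randn(1, 4, 2048))                   # 4 visual pseudo-tokens
inputs_embeds = torch.cat([img_tokens, word_emb], dim=1)
attn_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)

out = bert(inputs_embeds=inputs_embeds, attention_mask=attn_mask)
logits = classifier(out.last_hidden_state[:, 0])  # pool at the first position
print(logits.shape)  # torch.Size([1, 3])
```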