MSP-IMPROV: An acted corpus of dyadic interactions to study emotion perception. C Busso, S Parthasarathy, A Burmania, M AbdelWahab, N Sadoughi, et al. IEEE Transactions on Affective Computing 8 (1), 67-80, 2016. Cited by 358.
Jointly Predicting Arousal, Valence and Dominance with Multi-Task Learning. S Parthasarathy, C Busso. Interspeech 2017, 1103-1107, 2017. Cited by 150.
Increasing the reliability of crowdsourcing evaluations using online quality assessment. A Burmania, S Parthasarathy, C Busso. IEEE Transactions on Affective Computing 7 (4), 374-388, 2015. Cited by 132.
Semi-supervised speech emotion recognition with ladder networks. S Parthasarathy, C Busso. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28, 2697-2709, 2020. Cited by 107.
Ladder networks for emotion recognition: Using unsupervised auxiliary tasks to improve predictions of emotional attributes. S Parthasarathy, C Busso. arXiv preprint arXiv:1804.10816, 2018. Cited by 65.
Training strategies to handle missing modalities for audio-visual expression recognition. S Parthasarathy, S Sundaram. Companion Publication of the 2020 International Conference on Multimodal Interaction, 2020. Cited by 57.
Self-supervised learning with cross-modal transformers for emotion recognition. A Khare, S Parthasarathy, S Sundaram. 2021 IEEE Spoken Language Technology Workshop (SLT), 381-388, 2021. Cited by 47.
Multiresolution and multimodal speech recognition with transformers. G Paraskevopoulos, S Parthasarathy, A Khare, S Sundaram. arXiv preprint arXiv:2004.14840, 2020. Cited by 43.
Convolutional neural network techniques for speech emotion recognition. S Parthasarathy, I Tashev. 2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC), 2018. Cited by 37.
A study of speaker verification performance with expressive speech. S Parthasarathy, C Zhang, JHL Hansen, C Busso. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017. Cited by 33.
Using agreement on direction of change to build rank-based emotion classifiers. S Parthasarathy, R Cowie, C Busso. IEEE/ACM Transactions on Audio, Speech, and Language Processing 24 (11), …, 2016. Cited by 32.
Detecting expressions with multimodal transformers. S Parthasarathy, S Sundaram. 2021 IEEE Spoken Language Technology Workshop (SLT), 636-643, 2021. Cited by 30.
Ranking emotional attributes with deep neural networks. S Parthasarathy, R Lotfian, C Busso. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017. Cited by 29.
Role of regularization in the prediction of valence from speech. K Sridhar, S Parthasarathy, C Busso. Interspeech 2018, 2018. Cited by 26.
Improving emotion classification through variational inference of latent variables. S Parthasarathy, V Rozgic, M Sun, C Wang. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019. Cited by 23.
Multi-modal embeddings using multi-task learning for emotion recognition. A Khare, S Parthasarathy, S Sundaram. arXiv preprint arXiv:2009.05019, 2020. Cited by 21.
Predicting speaker recognition reliability by considering emotional content. S Parthasarathy, C Busso. 2017 Seventh International Conference on Affective Computing and Intelligent Interaction (ACII), 2017. Cited by 20.
Defining emotionally salient regions using qualitative agreement method. S Parthasarathy, C Busso. Interspeech 2016, 3598-3602, 2016. Cited by 20.
Predicting emotionally salient regions using qualitative agreement of deep neural network regressors. S Parthasarathy, C Busso. IEEE Transactions on Affective Computing 12 (2), 402-416, 2018. Cited by 19.
Preference-learning with qualitative agreement for sentence level emotional annotations. S Parthasarathy, C Busso. Interspeech 2018, 2018. Cited by 17.