Title | Authors | Venue | Cited by | Year
Comparison of time-frequency representations for environmental sound classification using convolutional neural networks | M Huzaifah | arXiv preprint arXiv:1706.07156, 2017 | 211 | 2017
Deep generative models for musical audio synthesis | M Huzaifah, L Wyse | Handbook of artificial intelligence for music: foundations, advanced …, 2021 | 28 | 2021
Applying visual domain style transfer and texture synthesis techniques to audio: Insights and challenges | M Huzaifah, L Wyse | Neural Computing and Applications, 1-15, 2019 | 25* | 2019
An analysis of semantically-aligned speech-text embeddings | M Huzaifah, I Kukanov | 2022 IEEE Spoken Language Technology Workshop (SLT), 747-754, 2023 | 6* | 2023
MTCRNN: A multi-scale RNN for directed audio texture synthesis | M Huzaifah, L Wyse | arXiv preprint arXiv:2011.12596, 2020 | 5 | 2020
Deep learning models for generating audio textures | L Wyse, M Huzaifah | Proceedings of the 2020 Joint Conference on Music Creativity, Stockholm, Sweden, 2020 | 4 | 2020
Conditioning a Recurrent Neural Network to synthesize musical instrument transients | L Wyse, M Huzaifah | Sound and Music Computing Conference, 525-529, 2019 | 3 | 2019
Audio textures in terms of generative models | L Wyse, M Huzaifah | MML 2020, 36, 2020 | 2 | 2020
Evaluating Code-Switching Translation with Large Language Models | M Huzaifah, W Zheng, N Chanpaisit, K Wu | Proceedings of the 2024 Joint International Conference on Computational …, 2024 | 1 | 2024
I2R’s End-to-End Speech Translation System for IWSLT 2023 Offline Shared Task | M Huzaifah, KM Tan, R Duan | Proceedings of the 20th International Conference on Spoken Language …, 2023 | 1 | 2023
Directed Audio Texture Synthesis with Deep Learning | M Huzaifah | National University of Singapore, 2020 | | 2020