M Huzaifah
Agency for Science, Technology and Research
Verified email at i2r.a-star.edu.sg
Title
Cited by
Year
Comparison of time-frequency representations for environmental sound classification using convolutional neural networks
M Huzaifah
arXiv preprint arXiv:1706.07156, 2017
211 · 2017
Deep generative models for musical audio synthesis
M Huzaifah, L Wyse
Handbook of artificial intelligence for music: foundations, advanced …, 2021
28 · 2021
Applying visual domain style transfer and texture synthesis techniques to audio: Insights and challenges
M Huzaifah, L Wyse
Neural Computing and Applications, 1-15, 2019
25* · 2019
An analysis of semantically-aligned speech-text embeddings
M Huzaifah, I Kukanov
2022 IEEE Spoken Language Technology Workshop (SLT), 747-754, 2023
6* · 2023
MTCRNN: A multi-scale RNN for directed audio texture synthesis
M Huzaifah, L Wyse
arXiv preprint arXiv:2011.12596, 2020
5 · 2020
Deep learning models for generating audio textures
L Wyse, M Huzaifah
Proceedings of the 2020 Joint Conference on Music Creativity, Stockholm, Sweden, 2020
4 · 2020
Conditioning a Recurrent Neural Network to synthesize musical instrument transients
L Wyse, M Huzaifah
Sound and Music Computing Conference, 525-529, 2019
3 · 2019
Audio textures in terms of generative models
L Wyse, M Huzaifah
MML 2020, 36, 2020
2 · 2020
Evaluating Code-Switching Translation with Large Language Models
M Huzaifah, W Zheng, N Chanpaisit, K Wu
Proceedings of the 2024 Joint International Conference on Computational …, 2024
1 · 2024
I2R’s End-to-End Speech Translation System for IWSLT 2023 Offline Shared Task
M Huzaifah, KM Tan, R Duan
Proceedings of the 20th International Conference on Spoken Language …, 2023
1 · 2023
Directed Audio Texture Synthesis with Deep Learning
M Huzaifah
National University of Singapore, 2020
2020
Articles 1–11