Do RNN and LSTM have long memory? J. Zhao, F. Huang, J. Lv, Y. Duan, Z. Qin, G. Li, G. Tian. International Conference on Machine Learning (ICML), pp. 11365-11375, 2020. Cited by 187.
Encoding recurrence into transformers. F. Huang, K. Lu, Y. Cai, Z. Qin, Y. Fang, G. Tian, G. Li. The Eleventh International Conference on Learning Representations (ICLR), 2023. Cited by 17.
Compact autoregressive network. D. Wang, F. Huang, J. Zhao, G. Li, G. Tian. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), pp. 6145-6152, 2020. Cited by 3.
SARMA: Scalable low-rank high-dimensional autoregressive moving averages via tensor decomposition. F. Huang, K. Lu, Y. Zheng. arXiv preprint arXiv:2405.00626, 2024. Cited by 1.
Contrastive learning on multimodal analysis of electronic health records. T. Cai, F. Huang, R. Nakada, L. Zhang, D. Zhou. arXiv preprint arXiv:2403.14926, 2024. Cited by 1.
A new measure of model redundancy for compressed convolutional neural networks. F. Huang, Y. Si, Y. Zheng, G. Li. arXiv preprint arXiv:2112.04857, 2021. Cited by 1.
Supervised factor modeling for high-dimensional linear time series. F. Huang, K. Lu, G. Li. Available at SSRN 4758811, 2023.
Rethinking compressed convolution neural network from a statistical perspective. F. Huang, Y. Si, G. Li.