Hanayo: Harnessing Wave-like Pipeline Parallelism for Enhanced Large Model Training Efficiency. Z Liu, S Cheng, H Zhou, Y You. SC '23: Proceedings of the International Conference for High Performance …, 2023. Cited by 16.
EnergonAI: An inference system for 10-100 billion parameter transformer models. J Du, Z Liu, J Fang, S Li, Y Li, Y Lu, Y You. arXiv preprint arXiv:2209.02341, 2022. Cited by 3.
DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers. X Zhao, S Cheng, Z Zheng, Z Yang, Z Liu, Y You. arXiv preprint arXiv:2403.10266, 2024. Cited by 2.
Wallfacer: Guiding transformer model training out of the long-context dark forest with n-body problem. Z Liu, S Wang, S Cheng, Z Zhao, Y Bai, X Zhao, J Demmel, Y You. arXiv preprint arXiv:2407.00611, 2024. Cited by 1.
HeteGen: Efficient Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices. X Zhao, B Jia, H Zhou, Z Liu, S Cheng, Y You. MLSys 2024: Proceedings of Machine Learning and Systems 6, 162-172, 2024. Cited by 1.
AutoChunk: Automated Activation Chunk for Memory-Efficient Long Sequence Inference. X Zhao, S Cheng, G Lu, J Fang, H Zhou, B Jia, Z Liu, Y You. Proceedings of the 12th International Conference on Learning Representations, 2024. Cited by 1.
ATP: Adaptive Tensor Parallelism for Foundation Models. S Cheng, Z Liu, J Du, Y You. arXiv preprint arXiv:2301.08658, 2023. Cited by 1.