Hailin Zhang
Verified email at pku.edu.cn - Homepage
Title
Cited by
Year
Retrieval-augmented generation for AI-generated content: A survey
P Zhao, H Zhang, Q Yu, Z Wang, Y Geng, F Fu, L Yang, W Zhang, B Cui
arXiv preprint arXiv:2402.19473, 2024
53 · 2024
HET: scaling out huge embedding model training via cache-enabled distributed framework
X Miao, H Zhang, Y Shi, X Nie, Z Yang, Y Tao, B Cui
Proceedings of the VLDB Endowment 15 (2), 312–320, 2021
46 · 2021
Galvatron: Efficient Transformer Training over Multiple GPUs Using Automatic Parallelism
X Miao, Y Wang, Y Jiang, C Shi, X Nie, H Zhang, B Cui
Proceedings of the VLDB Endowment 16 (3), 470–479, 2022
38 · 2022
HET-GMP: A graph-based system approach to scaling large embedding model training
X Miao, Y Shi, H Zhang, X Zhang, X Nie, Z Yang, B Cui
Proceedings of the 2022 International Conference on Management of Data, 470-480, 2022
16 · 2022
Hetu: A highly efficient automatic parallel distributed deep learning system
X Miao, X Nie, H Zhang, T Zhao, B Cui
Science China Information Sciences 66 (1), 117101, 2023
13 · 2023
Model-enhanced vector index
H Zhang, Y Wang, Q Chen, R Chang, T Zhang, Z Miao, Y Hou, Y Ding, ...
Advances in Neural Information Processing Systems 36, 2024
12 · 2024
CAFE: Towards Compact, Adaptive, and Fast Embedding for Large-scale Recommendation Models
H Zhang, Z Liu, B Chen, Y Zhao, T Zhao, T Yang, B Cui
Proceedings of the ACM on Management of Data 2 (1), 1-28, 2024
4 · 2024
Experimental analysis of large-scale learnable vector storage compression
H Zhang, P Zhao, X Miao, Y Shao, Z Liu, T Yang, B Cui
Proceedings of the VLDB Endowment 17 (4), 808–822, 2023
4 · 2023
Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling
S Li, P Zhao, H Zhang, X Sun, H Wu, D Jiao, W Wang, C Liu, Z Fang, ...
arXiv preprint arXiv:2405.14578, 2024
1 · 2024
Efficiently Training 7B LLM with 1 Million Sequence Length on 8 GPUs
P Zhao, H Zhang, F Fu, X Nie, Q Liu, F Yang, Y Peng, D Jiao, S Li, J Xue, ...
arXiv preprint arXiv:2407.12117, 2024
2024
PQCache: Product Quantization-based KVCache for Long Context LLM Inference
H Zhang, X Ji, Y Chen, F Fu, X Miao, X Nie, W Chen, B Cui
arXiv preprint arXiv:2407.12820, 2024
2024
A Unified Framework for Mining Batch and Periodic Batch in Data Streams
Z Liu, X Wang, Y Wu, T Yang, K Yang, H Zhang, Y Tu, B Cui
IEEE Transactions on Knowledge and Data Engineering, 2024
2024
Articles 1–12