Srinivas Sridharan, PhD
Distinguished Engineer, NVIDIA
Verified email at nvidia.com - Homepage
Title
Cited by
Year
Distributed deep learning using synchronous stochastic gradient descent
D Das, S Avancha, D Mudigere, K Vaidynathan, S Sridharan, D Kalamkar, ...
arXiv preprint arXiv:1602.06709, 2016
210 · 2016
Mixed precision training of convolutional neural networks using integer operations
D Das, N Mellempudi, D Mudigere, D Kalamkar, S Avancha, K Banerjee, ...
arXiv preprint arXiv:1802.00930, 2018
191 · 2018
Deep learning at 15pf: supervised and semi-supervised classification for scientific data
T Kurth, J Zhang, N Satish, E Racah, I Mitliagkas, MMA Patwary, T Malas, ...
Proceedings of the International Conference for High Performance Computing …, 2017
95 · 2017
Machine learning accelerator mechanism
A Bleiweiss, A Ramesh, A Mishra, D Marr, J Cook, S Sridharan, ...
US Patent 11,373,088, 2022
94 · 2022
Software-hardware co-design for fast and scalable training of deep learning recommendation models
D Mudigere, Y Hao, J Huang, Z Jia, A Tulloch, S Sridharan, X Liu, ...
Proceedings of the 49th Annual International Symposium on Computer …, 2022
84 · 2022
Deep learning training in facebook data centers: Design of scale-up and scale-out systems
M Naumov, J Kim, D Mudigere, S Sridharan, X Wang, W Zhao, S Yilmaz, ...
arXiv preprint arXiv:2003.09518, 2020
84 · 2020
Abstraction layers for scalable distributed machine learning
DD Kalamkar, K Vaidyanathan, S Sridharan, D Das
US Patent 11,094,029, 2021
69 · 2021
Fine-grain compute communication execution for deep learning frameworks
S Sridharan, D Mudigere
US Patent App. 15/869,502, 2018
69 · 2018
Communication optimizations for distributed machine learning
S Sridharan, K Vaidyanathan, D Das, C Sakthivel, ME Smorkalov
US Patent 11,270,201, 2022
64 · 2022
Enabling efficient multithreaded MPI communication through a library-based implementation of MPI endpoints
S Sridharan, J Dinan, DD Kalamkar
SC'14: Proceedings of the International Conference for High Performance …, 2014
57 · 2014
Hardware implemented point to point communication primitives for machine learning
S Sridharan, K Vaidyanathan, D Das
US Patent 11,488,008, 2022
53 · 2022
Astra-sim: Enabling sw/hw co-design exploration for distributed dl training platforms
S Rashidi, S Sridharan, S Srinivasan, T Krishna
2020 IEEE International Symposium on Performance Analysis of Systems and …, 2020
51 · 2020
Dynamic precision management for integer deep learning primitives
N Mellempudi, D Mudigere, D Das, S Sridharan
US Patent 10,643,297, 2020
48 · 2020
Thread migration to improve synchronization performance
S Sridharan, B Keck, R Murphy, S Chandra, P Kogge
Workshop on Operating System Interference in High Performance Applications, 2006
40 · 2006
High-performance, distributed training of large-scale deep learning recommendation models
D Mudigere, Y Hao, J Huang, Z Jia, A Tulloch, S Sridharan, X Liu, ...
M Khorashadi, P Bhattacharya, P Lapukhov, M Naumov, L Qiao, M Smelyanskiy, B Jia, V …, 2021
39 · 2021
Enabling compute-communication overlap in distributed deep learning training platforms
S Rashidi, M Denton, S Sridharan, S Srinivasan, A Suresh, J Nie, ...
2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture …, 2021
38 · 2021
On scale-out deep learning training for cloud and hpc
S Sridharan, K Vaidyanathan, D Kalamkar, D Das, ME Smorkalov, ...
arXiv preprint arXiv:1801.08030, 2018
35 · 2018
High-performance, distributed training of large-scale deep learning recommendation models
D Mudigere, Y Hao, J Huang, A Tulloch, S Sridharan, X Liu, M Ozdal, ...
arXiv preprint arXiv:2104.05158, 2021
33 · 2021
Memory in processor: A novel design paradigm for supercomputing architectures
N Venkateswaran, WR Foundation, A Krishnan, SN Kumar, A Shriraman, ...
ACM SIGARCH Computer Architecture News 32 (3), 19-26, 2003
27 · 2003
Themis: A network bandwidth-aware collective scheduling policy for distributed training of dl models
S Rashidi, W Won, S Srinivasan, S Sridharan, T Krishna
Proceedings of the 49th Annual International Symposium on Computer …, 2022
26 · 2022
Articles 1–20