Torchy: A Tracing JIT Compiler for PyTorch

NP Lopes - Proceedings of the 32nd ACM SIGPLAN International …, 2023 - dl.acm.org
Machine learning (ML) models keep getting larger and more complex. Whereas before
models used to be represented by static data-flow graphs, they are now implemented via …
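A minimal sketch (not Torchy itself, and not code from the paper) of the shift this abstract describes, using stock PyTorch APIs: in eager mode each call dispatches a kernel immediately, while torch.jit.trace records the same calls into a static data-flow graph that a JIT can optimize as a whole. All function and tensor names below are illustrative.

```python
import torch

def layer(x, w, b):
    # Three separate eager dispatches: matmul, add, relu.
    return torch.relu(x @ w + b)

x = torch.randn(8, 16)
w = torch.randn(16, 32)
b = torch.randn(32)

eager_out = layer(x, w, b)                  # op-by-op eager execution
traced = torch.jit.trace(layer, (x, w, b))  # record a static data-flow graph
traced_out = traced(x, w, b)                # run the recorded graph

assert torch.allclose(eager_out, traced_out)
```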

A deep learning dataloader with shared data preparation

J Xu, G Wang, Y Yao, Z Li, C Cao… - Advances in Neural …, 2022 - proceedings.neurips.cc
Executing a family of Deep Neural Network (DNN) training jobs on the same or similar
datasets in parallel is typical in current deep learning scenarios. It is time-consuming and …

ACROBAT: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time

P Fegade, T Chen, P Gibbons… - Proceedings of Machine …, 2024 - proceedings.mlsys.org
Dynamic control flow is an important technique often used to design expressive and efficient
deep learning computations for applications such as text parsing, machine translation …
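A small example (my own, not taken from ACROBAT) of the kind of dynamic control flow the abstract refers to: the number of recurrent steps depends on each input's length, so the executed graph differs per example and naive static batching breaks. The encode function and its shapes are hypothetical.

```python
import torch

def encode(tokens, cell, h0):
    # Data-dependent loop: executes len(tokens) times.
    h = h0
    for t in tokens:
        h = torch.tanh(cell(torch.cat([t, h])))
    return h

cell = torch.nn.Linear(8 + 4, 4)
h0 = torch.zeros(4)

# Two "sentences" of different lengths take different control-flow paths,
# which is what auto-batching systems must reconcile at compile time.
short = [torch.randn(8) for _ in range(3)]
long = [torch.randn(8) for _ in range(7)]
print(encode(short, cell, h0), encode(long, cell, h0))
```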

[PDF] MAGPY: compiling eager mode DNN programs by monitoring execution states

C Zhang, R Dong, H Wang, R Zhong… - Proceedings of the …, 2024 - heheda12345.github.io
Real-world deep learning programs are often developed with dynamic programming
languages like Python, which usually have complex features, such as built-in functions and …

[PDF][PDF] Auto-batching Techniques for Dynamic Deep Learning Computation

P Fegade - 2022 - reports-archive.adm.cs.cmu.edu
Deep learning has increasingly begun to be used across a wide range of computing
applications. Dynamism—the property where the execution of a computation differs in some …

[PDF] Torchy: A Tracing JIT Compiler for PyTorch (Extended Version)

NP Lopes - ist.utl.pt
Although eager-mode frameworks are more convenient, they are less efficient today as
operations are dispatched to the hardware one at a time. This execution model precludes …
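A hedged sketch of the general idea behind avoiding one-at-a-time dispatch (not Torchy's actual implementation): record operations into a small trace and execute them only when a concrete value is needed, so a backend could optimize or fuse the recorded ops. All class and function names here are hypothetical.

```python
import torch

class LazyTensor:
    def __init__(self, op, inputs):
        self.op, self.inputs, self._value = op, inputs, None

    def materialize(self):
        # "Flush" the recorded trace: evaluate inputs first, then this op.
        if self._value is None:
            args = [i.materialize() if isinstance(i, LazyTensor) else i
                    for i in self.inputs]
            self._value = self.op(*args)
        return self._value

def lazy(op):
    # Wrap an eager op so it records a trace node instead of dispatching now.
    return lambda *inputs: LazyTensor(op, inputs)

add, relu = lazy(torch.add), lazy(torch.relu)

x = torch.randn(4)
y = relu(add(x, 1.0))    # nothing dispatched yet, just a 2-node trace
print(y.materialize())   # the trace executes only when the value is needed
```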