Learning to parallelize in a shared-memory environment with transformers

R Harel, Y Pinter, G Oren - Proceedings of the 28th ACM SIGPLAN …, 2023 - dl.acm.org
In past years, the world has switched to multi- and many-core shared memory architectures.
As a result, there is a growing need to utilize these architectures by introducing shared …

Advising OpenMP parallelization via a graph-based approach with transformers

T Kadosh, N Schneider, N Hasabnis, T Mattson… - … Workshop on OpenMP, 2023 - Springer
There is an ever-present need for shared memory parallelization schemes to exploit the full
potential of multi-core architectures. The most common parallelization API addressing this …
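
For orientation, here is a minimal sketch in C (not taken from any of the cited papers; the array, its size, and the loops are hypothetical) of the kind of OpenMP shared-memory directive that such parallelization advisors suggest or insert:

    #include <stdio.h>
    #include <omp.h>

    /* Hypothetical workload: fill and sum an array in parallel. */
    int main(void) {
        enum { N = 1000000 };
        static double a[N];
        double sum = 0.0;

        /* Iterations are independent, so the loop is safe to parallelize. */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = 0.5 * (double)i;

        /* The reduction clause avoids a data race on the shared accumulator. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %f\n", sum);
        return 0;
    }

Built with an OpenMP-aware compiler (e.g., gcc -fopenmp), the same source still runs serially if the pragmas are ignored, which is why pragma placement is a natural target for automated advice.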

Quantifying OpenMP: Statistical insights into usage and adoption

T Kadosh, N Hasabnis, T Mattson… - 2023 IEEE High …, 2023 - ieeexplore.ieee.org
In high-performance computing (HPC), the demand for efficient parallel programming
models has grown dramatically since the end of Dennard Scaling and the subsequent move …

Domain-Specific Code Language Models: Unraveling the Potential for HPC Codes and Tasks

T Kadosh, N Hasabnis, VA Vo, N Schneider… - arXiv preprint arXiv …, 2023 - arxiv.org
With easier access to powerful compute resources, there is a growing trend in AI for software
development to develop larger language models (LLMs) to address a variety of …

Scope is all you need: Transforming LLMs for HPC Code

T Kadosh, N Hasabnis, VA Vo, N Schneider… - arXiv preprint arXiv …, 2023 - arxiv.org
With easier access to powerful compute resources, there is a growing trend in the field of AI
for software development to develop larger and larger language models (LLMs) to address a …

Automatic and interactive program parallelization using the Cetus source to source compiler infrastructure v2.0

A Bhosale, P Barakhshan, MR Rosas, R Eigenmann - Electronics, 2022 - mdpi.com
This paper presents an overview and evaluation of the existing and newly added analysis
and transformation techniques in the Cetus source-to-source compiler infrastructure. Cetus …

Position Paper: The Landscape and Challenges of HPC Research and LLMs

L Chen, NK Ahmed, A Dutta, A Bhattacharjee… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, language models (LMs), especially large language models (LLMs), have
revolutionized the field of deep learning. Both encoder-decoder models and prompt-based …

PragFormer: Data-driven Parallel Source Code Classification with Transformers

T Kadosh, N Hasabnis, T Mattson, Y Pinter, G Oren - 2023 - researchsquare.com
Multi-core shared memory architectures have become ubiquitous in computing hardware
nowadays. As a result, there is a growing need to fully utilize these architectures by …

MPIrigen: MPI Code Generation through Domain-Specific Language Models

N Schneider, N Hasabnis, VA Vo, T Kadosh… - Proceedings of the …, 2024 - dl.acm.org
The imperative need to scale computation across numerous nodes highlights the
significance of efficient parallel computing, particularly in the realm of Message Passing …
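
As a point of reference, the basic MPI skeleton below (standard MPI calls in C; the program itself is a hypothetical illustration, not output from MPIrigen) shows the kind of boilerplate such domain-specific code models are trained to generate around:

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, size;

        MPI_Init(&argc, &argv);                /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */

        printf("rank %d of %d\n", rank, size);

        MPI_Finalize();                        /* shut the runtime down      */
        return 0;
    }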

OMPar: Automatic Parallelization with AI-Driven Source-to-Source Compilation

T Kadosh, N Hasabnis, P Soundararajan, VA Vo… - arXiv preprint arXiv …, 2024 - arxiv.org
Manual parallelization of code remains a significant challenge due to the complexities of
modern software systems and the widespread adoption of multi-core architectures. This …