LLM4HPC 2026

The 2nd International Workshop on Foundational Large Language Models Advances for HPC

to be held in conjunction with
ISC-HPC 2026



26 June, 2026
Hamburg, Germany

Introduction

Since their development and release, modern Large Language Models (LLMs), such as the Generative Pre-trained Transformer (GPT) and the Large Language Model Meta AI (LLaMA), have come to signify a revolution in human-computer interaction, driven by the high quality of their results. LLMs have reshaped this landscape thanks to unprecedented investments and enormous model sizes (hundreds of billions of parameters). Their availability has led to growing interest in how they could be applied to a wide variety of applications. The HPC community has recently undertaken research efforts to evaluate current LLM capabilities on HPC tasks, including code generation, auto-parallelization, performance portability, and correctness, among others. These studies concluded that state-of-the-art LLM capabilities have so far proven insufficient for these targets. Hence, it is necessary to explore novel techniques that further empower LLMs to enrich the HPC mission and its impact.

Call For Papers

Objectives, scope and topics of the workshop

This workshop focuses on LLM advances for the major priorities and challenges of HPC. Its aim is to define and discuss the fundamentals of LLMs for HPC-specific tasks, including but not limited to hardware design, compilation, parallel programming models and runtimes, and application development, as well as enabling LLM technologies to make more autonomous decisions about the efficient use of HPC. The workshop provides a forum to discuss new and emerging solutions to these important challenges on the path towards an AI-assisted HPC era. Papers are sought on all aspects of LLMs for HPC, including but not limited to the areas above.

Program

9:00AM-9:10AM: Opening, Pedro Valero-Lara
9:10AM-9:40AM: Keynote: Exploring Multi-Agent Systems for HPC Code Development, Daichi Mukunoki
9:40AM-10:00AM: Talk 1: LLMs Evaluation for Fortran HPC Code Generation and Translation
10:00AM-10:20AM: Talk 2: Improving HPC Code Generation Capability of LLMs via Online Reinforcement Learning with Real-Machine Benchmark Rewards
10:20AM-10:40AM: Talk 3: Impact of Inference-Time Reasoning on LLM-Based Parallel Code Generation: a Case-Study of Kokkos
10:40AM-11:00AM: Break
11:00AM-11:30AM: Invited talk: Leveraging LLMs Across the HPC Software Stack, Siavash Ghiasvand
11:30AM-11:50AM: Talk 4: AMReX Agent: Framework-Aware LLM Automation for Multiphysics Simulations
11:50AM-12:10PM: Talk 5: Multi-Agent Orchestration for High-Throughput Materials Screening on a Leadership-Class System
12:10PM-12:40PM: Talk 6: AstraAI: LLMs, Retrieval, and AST-Guided Assistance for HPC Codebases
12:40PM-12:50PM: Best Paper Award
12:50PM-1:00PM: Closing

Important Dates

Paper submission deadline: March 13, 2026
Notification of acceptance: March 27, 2026
Camera-ready papers due: May 15, 2026
Workshop day: June 26, 2026

Schedule

June 26, 2026: 9:00AM - 1:00PM
Hall X4 - 1st Floor

Steering Committee

Jeffrey S. Vetter, Oak Ridge National Laboratory, USA

Franz Franchetti, Carnegie Mellon University, USA

Abhinav Bhatele, University of Maryland, USA

Organizers (Contact us)

Pedro Valero-Lara (chair)
Oak Ridge National Laboratory, USA
valerolarap@ornl.gov

Simon Garcia de Gonzalo (co-chair)
Sandia National Laboratories, USA
simgarc@sandia.gov

Ignacio Laguna (co-chair)
Lawrence Livermore National Laboratory, USA
lagunaperalt1@llnl.gov

Upasana Sridhar (co-chair)
Carnegie Mellon University, USA
upasanas@andrew.cmu.edu

Programme Committee

Manuscript submission

We invite submissions of original, unpublished research and experiential papers. Papers should be between 6 and 12 pages in length (including the bibliography and appendices), with up to two extra pages allowed after review to address the reviewers' comments, formatted according to Springer's Lecture Notes in Computer Science (LNCS) style. All paper submissions will be managed electronically via ISC-HPC Linklings.

Proceedings

All accepted papers will be published in the ISC-HPC Workshops 2026 proceedings by Springer (SpringerLink).

Best Paper Award

The Best Paper Award will be selected based on the reviewers' explicit recommendations and their scoring of each paper's originality and quality.

Keynote Speaker (Daichi Mukunoki, Nagoya University):

Exploring Multi-Agent Systems for HPC Code Development

Recent advances in code generation AI have demonstrated remarkable potential to transform software development. However, applying these technologies to the high-performance computing (HPC) domain remains challenging. HPC code requires not only functional correctness but also a range of additional considerations, such as architecture-specific performance optimization, support for GPUs and Fortran, appropriate algorithm selection tailored to the target environment, and careful control of numerical accuracy. At the Information Technology Center of Nagoya University, we are advancing HPC-GENIE, a research and development project focused on applying generative AI to HPC code development. Rather than developing new models, we concentrate on designing AI agents built on existing models. In particular, we explore the potential of multi-agent systems in which multiple specialized agents collaborate to address complex HPC development tasks. We are also developing lightweight systems that operate entirely in local environments without relying on commercial services. In this talk, we discuss the current challenges and future prospects of AI-driven agents for HPC code development.
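The collaboration pattern the abstract describes, in which multiple specialized agents iterate on a complex development task, can be illustrated with a minimal sketch. The "agents" below are plain Python functions standing in for LLM calls (as HPC-GENIE targets local models, each would in practice query a locally hosted model); all function names and the review heuristic are hypothetical, purely for illustration.

```python
# Illustrative sketch of a two-agent generate/review loop for HPC code
# development. The agent functions are stand-ins for LLM calls; the
# names and the toy review rule are hypothetical, not from HPC-GENIE.

def generator_agent(task: str) -> str:
    """Drafts code for the task (stand-in for a code-generation LLM)."""
    return f"// draft kernel for: {task}"


def reviewer_agent(code: str) -> list[str]:
    """Returns a list of issues found in the draft (stand-in for a
    reviewer LLM specialized in, e.g., performance or accuracy)."""
    return [] if code.startswith("// draft kernel") else ["missing kernel header"]


def orchestrate(task: str, max_rounds: int = 3) -> str:
    """Lets the generator and reviewer collaborate until the reviewer
    raises no issues or the round budget is exhausted."""
    code = generator_agent(task)
    for _ in range(max_rounds):
        issues = reviewer_agent(code)
        if not issues:
            break
        # Feed the reviewer's feedback back into the generator.
        code = generator_agent(f"{task}; fix: {'; '.join(issues)}")
    return code


if __name__ == "__main__":
    print(orchestrate("SAXPY on GPU"))
```

Real systems of this kind typically add further specialized agents (algorithm selection, numerical-accuracy checking, benchmarking) and route feedback between them in the same loop-until-converged style.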

Daichi Mukunoki is an Assistant Professor at the Information Technology Center, Nagoya University. He held research positions at the RIKEN Center for Computational Science from 2014 to 2023, serving as a Postdoctoral Researcher and later as a Research Scientist. He was also a Postdoctoral Research Fellow at Tokyo Woman’s Christian University and a JSPS Research Fellow at the University of Tsukuba, where he received his Ph.D. in Engineering in 2013. His research interests include GPU computing, numerical computing, mixed-precision algorithms, computer arithmetic, and code-generative AI.

Invited Speaker (Siavash Ghiasvand, ScaDS.AI Dresden/Leipzig):

Leveraging LLMs Across the HPC Software Stack

Recent advances in generative AI are transforming how users interact with, manage, and optimize performance in high-performance computing (HPC) environments. This talk examines how integrating AI-assisted tools and methodologies can elevate the HPC user experience—from adaptive resource allocation and automated job optimization to natural language interfaces that simplify complex system interactions.

Dr. Siavash Ghiasvand, Senior Researcher at ScaDS.AI Dresden/Leipzig, focuses on the integration of High-Performance Computing (HPC) and Artificial Intelligence. His research in performance engineering and system resiliency aims to optimize large-scale infrastructures for complex AI workloads. He also leads efforts to democratize supercomputing by leveraging modern AI techniques—such as Large Language Models (LLMs) and generative AI—to reduce technical barriers and make HPC more accessible to a broader community of users.

Registration

Information about registration is available at the ISC-HPC 2026 website.