MLG2023

Call for Papers

The intent of this workshop is to bring together researchers, practitioners, and scientific communities to discuss methods that use extreme scale systems for learning from graph data. The workshop focuses on the greatest challenges in applying High Performance Computing (HPC) to machine learning with graphs and on methods for exploiting extreme scale parallelism across data, computation, and model optimization.

In recent years, the models and data available for machine learning (ML) applications have grown dramatically. High performance computing (HPC) offers the opportunity to accelerate performance and deepen understanding of large data sets through machine learning. Current literature and public implementations focus on either cloud-based or small-scale GPU environments. These implementations do not scale well in HPC environments due to inefficient data movement and network communication within the compute cluster, originating from the significant disparity in the level of parallelism. Additionally, applying machine learning to extreme scale scientific data is largely unexplored. To leverage HPC for ML applications, serious advances will be required in both algorithms and their scalable, parallel implementations.

We invite researchers and practitioners to participate in this workshop to discuss the challenges in using HPC and next-generation systems for machine learning with graphs and to share the wide range of applications that would benefit from HPC-powered graph machine learning, including knowledge graphs and natural language processing.

Topics will include, but are not limited to:

  • Machine learning models for graphs, such as deep learning, graph neural networks, and transformers, for extreme scale systems
  • Enhancing the applicability of machine learning in HPC (e.g., feature engineering, usability)
  • Challenges in using large scale knowledge graphs for domain problems (e.g., building a graph, knowledge graph completion, and visual analytics)
  • Graph representation learning and its applications (e.g., graph similarity)
  • Science and engineering applications of machine learning with graphs, such as computational chemistry, computational biology, healthcare, and natural language processing
  • Learning large models/optimizing hyperparameters for graph data (e.g., deep learning, representation learning)
  • Training machine learning models on large graph datasets and scientific data
  • Overcoming the problems inherent to large datasets through graph-based approaches (e.g., noisy labels, missing data, scalable ingest)
  • Applications of graph machine learning utilizing HPC
  • Future research challenges for graph machine learning on next-generation systems (e.g., neuromorphic computing and quantum computing)
  • Large scale graph machine learning applications

Authors are invited to submit full papers with unpublished, original work of 6-10 two-column pages (U.S. letter, 8.5″ x 11″), excluding the bibliography, using the IEEE proceedings template. The IEEE conference proceedings templates for LaTeX and MS Word provided by IEEE eXpress Conference Publishing are available for download; see the templates here. Submissions will be subject to a double-blind peer review process. Submissions will be selected to include both application-focused work utilizing Graph ML for HPC and novel methods enabling Graph ML on HPC. In support of the SC reproducibility initiative, we also encourage authors to include reproducibility appendices: https://sc24.supercomputing.org/submit/transparency-reproducibility-initiative/ Submitted papers will be peer-reviewed, and accepted papers (subject to post-review revisions) will be published in the ACM Digital Library. Papers will be submitted through the main SC submissions page: https://submissions.supercomputing.org.
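
As an illustrative sketch only (an assumption, not the official SC or MLG template), a minimal LaTeX skeleton using the IEEEtran conference class, with the author block anonymized as required for double-blind review, might look like the following:

  \documentclass[conference]{IEEEtran}

  % Author names and affiliations are anonymized for double-blind review;
  % restore them only in the camera-ready version.
  \title{Paper Title}
  \author{\IEEEauthorblockN{Anonymous Author(s)}}

  \begin{document}
  \maketitle

  \begin{abstract}
  Abstract text.
  \end{abstract}

  \section{Introduction}
  Body text in the IEEE two-column conference format.

  % The bibliography does not count toward the 6-10 page limit.
  \bibliographystyle{IEEEtran}

  \end{document}

Authors should rely on the official templates from IEEE eXpress Conference Publishing for the authoritative formatting requirements.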

AI-Generated Text

The use of artificial intelligence (AI)-generated text in an article shall be disclosed in the acknowledgements section of any paper submitted to SC. The sections of the paper that use AI-generated text shall include a citation to the AI system used to generate the text. Utilizing Large Language Models (LLMs) is permissible as a general-purpose writing assistance tool. Authors are expected to acknowledge their complete accountability for the contents of their papers, including content generated by LLMs that could be interpreted as plagiarism or scientific misconduct (e.g., fabrication of facts). LLMs are not eligible for authorship.

Important Links

Authors are invited to submit full papers with unpublished, original work of 6-10 pages, excluding the bibliography. Submissions will be subject to a double-blind peer review process. Submissions will be selected to include both application-focused work utilizing Graph ML for HPC and novel methods enabling Graph ML on HPC. All papers should be formatted using the IEEE format. In support of the SC reproducibility initiative, we also encourage, but do not require, authors to include reproducibility appendices: https://sc24.supercomputing.org/submit/transparency-reproducibility-initiative/ Submitted papers will be peer-reviewed, and accepted papers (subject to post-review revisions) will be archived in the ACM Digital Library and IEEE Xplore. Papers will be submitted through the main SC submissions page: https://submissions.supercomputing.org.

REVIEWS WILL BE DOUBLE-BLIND. PLEASE REMOVE AUTHOR NAMES FROM THE SUBMITTED DOCUMENT!

Papers must be 6-10 pages in length, written in English, and formatted according to the IEEE format guidelines referenced above.

All submissions will be peer-reviewed by at least three reviewers for correctness, originality, technical strength, significance, quality of presentation, and relevance to the workshop topics of interest. Submitted papers may not have appeared in or be under consideration for another workshop, conference, or journal, nor may they be under review or submitted to another forum during the MLG review process.

At least one author of an accepted paper must register for and present the paper at the workshop. In-person presentations are highly preferred.

August 12, 2024 – Submission deadline

September 2, 2024 – Notification of Acceptance

September 9, 2024 – Camera-ready submission due

November 18, 2024 – Workshop


Contact: Seung-Hwan Lim, lims1 "at" ornl.gov

© 2024 Oak Ridge National Laboratory
