PERFOGRAPH: A Numerical Aware Program Graph Representation for
Performance Optimization and Program Analysis
- URL: http://arxiv.org/abs/2306.00210v2
- Date: Thu, 30 Nov 2023 00:31:29 GMT
- Title: PERFOGRAPH: A Numerical Aware Program Graph Representation for
Performance Optimization and Program Analysis
- Authors: Ali TehraniJamsaz, Quazi Ishtiaque Mahmud, Le Chen, Nesreen K. Ahmed,
Ali Jannesari
- Abstract summary: A key challenge in adopting the latest machine learning methods is the representation of programming languages.
To overcome the limitations and challenges of current program representations, we propose a graph-based program representation called PERFOGRAPH.
PERFOGRAPH can capture numerical information and aggregate data structures by introducing new nodes and edges.
- Score: 12.778336318809092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remarkable growth and significant success of machine learning have
expanded its applications into programming languages and program analysis.
However, a key challenge in adopting the latest machine learning methods is the
representation of programming languages, which directly impacts the ability of
machine learning methods to reason about programs. The absence of numerical
awareness and aggregate data structure information, together with the improper
presentation of variables, has limited the performance of previous
representations. To
overcome the limitations and challenges of current program representations, we
propose a graph-based program representation called PERFOGRAPH. PERFOGRAPH can
capture numerical information and the aggregate data structure by introducing
new nodes and edges. Furthermore, we propose an adapted embedding method to
incorporate numerical awareness. These enhancements make PERFOGRAPH a highly
flexible and scalable representation that effectively captures programs'
intricate dependencies and semantics. Consequently, it serves as a powerful
tool for various applications such as program analysis, performance
optimization, and parallelism discovery. Our experimental results demonstrate
that PERFOGRAPH outperforms existing representations and sets new
state-of-the-art results by reducing the error rate by 7.4% (AMD dataset) and
10% (NVIDIA dataset) in the well-known Device Mapping challenge. It also sets
new state-of-the-art results in various performance optimization tasks like
Parallelism Discovery and NUMA and Prefetchers Configuration prediction.
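The abstract's "adapted embedding method to incorporate numerical awareness" can be illustrated with a minimal sketch. The code below is a hypothetical digit-wise embedding, not the paper's actual implementation: each character of a numeric literal is mapped to a vector and combined with a position-dependent weight, so arbitrary constants receive fixed-size embeddings instead of being collapsed into an out-of-vocabulary token. All names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# One vector per character of a numeric literal: digits, decimal point, sign.
# In a trained model these would be learned; here they are random stand-ins.
DIGIT_TABLE = {ch: rng.normal(size=DIM) for ch in "0123456789.-"}

def embed_number(value, dim=DIM):
    """Embed a numeric literal digit by digit.

    Each character contributes its vector scaled by a position-dependent
    weight, so e.g. 12 and 21 map to different embeddings, and literals
    never seen during training still get a meaningful representation.
    """
    vec = np.zeros(dim)
    for pos, ch in enumerate(str(value)):
        vec += DIGIT_TABLE[ch] / (pos + 1)  # position-aware weighting
    return vec
```

A vocabulary-based embedding would map every unseen constant to the same UNK vector; a character-level scheme like this sketch keeps distinct constants distinguishable, which is the property the abstract refers to as numerical awareness.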
Related papers
- Efficient End-to-end Language Model Fine-tuning on Graphs [21.23522552579571]
Learning from Text-Attributed Graphs (TAGs) has attracted significant attention due to its wide range of real-world applications.
We introduce LEADING, a novel and efficient approach for end-to-end fine-tuning of language models on TAGs.
Our proposed approach demonstrates superior performance, achieving state-of-the-art (SOTA) results on the ogbn-arxiv leaderboard.
arXiv Detail & Related papers (2023-12-07T22:35:16Z)
- Learning Generalizable Program and Architecture Representations for Performance Modeling [0.3277163122167434]
PerfVec is a novel deep learning-based performance modeling framework.
It learns high-dimensional and independent/orthogonal program and microarchitecture representations.
PerfVec yields a foundation model that captures the performance essence of instructions.
arXiv Detail & Related papers (2023-10-25T17:24:01Z)
- GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to an $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z)
- Music Instrument Classification Reprogrammed [79.68916470119743]
"Reprogramming" is a technique that utilizes pre-trained deep and complex neural networks originally targeting a different task by modifying and mapping both the input and output of the pre-trained model.
We demonstrate that reprogramming can effectively leverage the power of the representation learned for a different task and that the resulting reprogrammed system can perform on par or even outperform state-of-the-art systems at a fraction of training parameters.
arXiv Detail & Related papers (2022-11-15T18:26:01Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Program Enhanced Fact Verification with Verbalization and Graph Attention Network [25.33739187395408]
We present a Program-enhanced Verbalization and Graph Attention Network (ProgVGAT) to integrate programs and execution into textual inference models.
We construct the graph attention verification networks, which are designed to fuse different sources of evidences from verbalized program execution, program structures, and the original statements and tables.
Experimental results show that the proposed framework achieves the new state-of-the-art performance, a 74.4% accuracy, on the benchmark dataset TABFACT.
arXiv Detail & Related papers (2020-10-06T23:29:08Z)
- Graph signal processing for machine learning: A review and new perspectives [57.285378618394624]
We review a few important contributions made by GSP concepts and tools, such as graph filters and transforms, to the development of novel machine learning algorithms.
We discuss exploiting data structure and relational priors, improving data and computational efficiency, and enhancing model interpretability.
We provide new perspectives on future development of GSP techniques that may serve as a bridge between applied mathematics and signal processing on one side, and machine learning and network science on the other.
arXiv Detail & Related papers (2020-07-31T13:21:33Z)
- Beyond Graph Neural Networks with Lifted Relational Neural Networks [14.63152363481139]
We demonstrate a declarative differentiable programming framework based on the language of Lifted Neural Networks.
Small parameterized programs are used to encode learning.
We show how this idea can be used for an efficient encoding of a diverse range of advanced neural networks.
arXiv Detail & Related papers (2020-07-13T10:10:58Z)
- Can We Learn Heuristics For Graphical Model Inference Using Reinforcement Learning? [114.24881214319048]
We show that we can learn programs, i.e., policies, for solving inference in higher order Conditional Random Fields (CRFs) using reinforcement learning.
Our method solves inference tasks efficiently without imposing any constraints on the form of the potentials.
arXiv Detail & Related papers (2020-04-27T19:24:04Z)
- ProGraML: Graph-based Deep Learning for Program Optimization and Analysis [16.520971531754018]
We introduce ProGraML, a graph-based program representation for machine learning.
ProGraML achieves an average 94.0 F1 score, significantly outperforming the state-of-the-art approaches.
We then apply our approach to two high-level tasks - heterogeneous device mapping and program classification - setting new state-of-the-art performance in both.
arXiv Detail & Related papers (2020-03-23T20:27:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.