Novel Representation Learning Technique using Graphs for Performance
Analytics
- URL: http://arxiv.org/abs/2401.10799v1
- Date: Fri, 19 Jan 2024 16:34:37 GMT
- Title: Novel Representation Learning Technique using Graphs for Performance
Analytics
- Authors: Tarek Ramadan, Ankur Lahiry, Tanzima Z. Islam
- Abstract summary: We propose a novel idea of transforming performance data into graphs to leverage the advancement of Graph Neural Network-based (GNN) techniques.
In contrast to other Machine Learning application domains, such as social networks, the graph is not given; instead, we need to build it.
We evaluate the effectiveness of the generated embeddings from GNNs based on how well they make even a simple feed-forward neural network perform for regression tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance analytics domain in High Performance Computing (HPC) uses
tabular data to solve regression problems, such as predicting the execution
time. Existing Machine Learning (ML) techniques leverage the correlations among
features in tabular datasets but do not directly exploit the relationships
between samples. Moreover, since high-quality embeddings from raw features
improve the fidelity of the downstream predictive models, existing methods rely
on extensive feature engineering and pre-processing steps, costing time and
manual effort. To fill these two gaps, we propose a novel idea of transforming
tabular performance data into graphs to leverage the advancement of Graph
Neural Network-based (GNN) techniques in capturing complex relationships
between features and samples. In contrast to other ML application domains, such
as social networks, the graph is not given; instead, we need to build it. To
address this gap, we propose graph-building methods where nodes represent
samples, and the edges are automatically inferred iteratively based on the
similarity between the features in the samples. We evaluate the effectiveness
of the generated embeddings from GNNs based on how well they make even a simple
feed-forward neural network perform for regression tasks compared to other
state-of-the-art representation learning techniques. Our evaluation
demonstrates that even with up to 25% random missing values for each dataset,
our method outperforms commonly used graph and Deep Neural Network (DNN)-based
approaches, achieving up to 61.67% and 78.56% improvement in MSE loss over the
DNN baseline for the HPC and Machine Learning datasets, respectively.
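As a concrete illustration of the pipeline described above, the sketch below builds a sample-similarity graph from synthetic tabular data, derives node embeddings by propagating features over that graph, and feeds the embeddings to a small feed-forward regressor. It is a minimal sketch under simplifying assumptions: a one-shot k-nearest-neighbor graph stands in for the paper's iterative edge inference, plain neighborhood averaging stands in for a trained GNN encoder, and all names, sizes, and parameters are illustrative rather than the authors' implementation.

    # Minimal sketch (not the authors' implementation): sample-similarity graph
    # from tabular data, GNN-style feature propagation, feed-forward regression.
    import numpy as np
    from sklearn.neighbors import kneighbors_graph
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 16))                    # stand-in for tabular performance features
    y = X[:, :4].sum(axis=1) + rng.normal(scale=0.1, size=500)  # stand-in target (e.g., execution time)

    # Nodes are samples; edges connect samples with similar feature vectors.
    # (The paper infers edges iteratively; a k-NN graph is a one-shot proxy.)
    A = kneighbors_graph(X, n_neighbors=10, mode="connectivity", include_self=True)
    A = 0.5 * (A + A.T)                               # symmetrize the adjacency matrix
    deg = np.asarray(A.sum(axis=1)).ravel()
    P = np.diag(1.0 / deg) @ A.toarray()              # row-normalized propagation matrix

    # GNN-style embedding: two rounds of neighborhood averaging over the graph.
    H = P @ (P @ X)

    # A simple feed-forward network consumes the embeddings for regression,
    # mirroring the evaluation protocol described in the abstract.
    H_tr, H_te, y_tr, y_te = train_test_split(H, y, test_size=0.2, random_state=0)
    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    mlp.fit(H_tr, y_tr)
    print("test MSE:", np.mean((mlp.predict(H_te) - y_te) ** 2))

In the paper's setting, X and y would come from the tabular performance data (e.g., application configurations and hardware counters as features, execution time as the target), and the propagation step would be replaced by a learned GNN whose embeddings are passed to the feed-forward regressor.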
Related papers
- TANGNN: a Concise, Scalable and Effective Graph Neural Networks with Top-m Attention Mechanism for Graph Representation Learning [7.879217146851148]
We propose an innovative Graph Neural Network (GNN) architecture that integrates a Top-m attention mechanism aggregation component and a neighborhood aggregation component.
To assess the effectiveness of our proposed model, we have applied it to citation sentiment prediction, a novel task previously unexplored in the GNN field.
arXiv Detail & Related papers (2024-11-23T05:31:25Z)
- Enhanced Expressivity in Graph Neural Networks with Lanczos-Based Linear Constraints [7.605749412696919]
Graph Neural Networks (GNNs) excel in handling graph-structured data but often underperform in link prediction tasks.
We present a novel method to enhance the expressivity of GNNs by embedding induced subgraphs into the graph Laplacian matrix's eigenbasis.
Our method achieves 20x and 10x speedups while requiring only 5% and 10% of the data from the PubMed and OGBL-Vessel datasets, respectively.
arXiv Detail & Related papers (2024-08-22T12:22:00Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Graph-based Active Learning for Semi-supervised Classification of SAR Data [8.92985438874948]
We present a novel method for classification of Synthetic Aperture Radar (SAR) data by combining ideas from graph-based learning and neural network methods.
The CNNVAE feature embedding and graph construction require no labeled data, which reduces overfitting.
The method easily incorporates a human-in-the-loop for active learning in the data-labeling process.
arXiv Detail & Related papers (2022-03-31T00:14:06Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., a feedforward neural net) serving as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network serving as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Analyzing the Performance of Graph Neural Networks with Pipe Parallelism [2.269587850533721]
We focus on Graph Neural Networks (GNNs) that have found great success in tasks such as node or edge classification and link prediction.
New approaches for processing larger networks are needed to advance graph techniques.
We study how GNNs could be parallelized using existing tools and frameworks that are known to be successful in the deep learning community.
arXiv Detail & Related papers (2020-12-20T04:20:38Z)