Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges
- URL: http://arxiv.org/abs/2203.12852v1
- Date: Wed, 23 Mar 2022 04:36:04 GMT
- Title: Graph Neural Networks in Particle Physics: Implementations, Innovations, and Challenges
- Authors: Savannah Thais, Paolo Calafiura, Grigorios Chachamis, Gage DeZoort, Javier Duarte, Sanmay Ganguly, Michael Kagan, Daniel Murnane, Mark S. Neubauer, Kazuhiro Terao
- Abstract summary: We present a range of GNN capabilities, noting which are currently well-adopted in HEP communities and which are still immature.
With the widespread adoption of GNNs in industry, the HEP community is well-placed to benefit from rapid improvements in GNN latency and memory usage.
We hope to capture the landscape of graph techniques in machine learning and to point out the most significant gaps that are inhibiting potentially large leaps in research.
- Score: 7.071890461446324
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many physical systems can be best understood as sets of discrete data with
associated relationships. Where previously these sets of data have been
formulated as series or image data to match the available machine learning
architectures, with the advent of graph neural networks (GNNs), these systems
can be learned natively as graphs. This allows a wide variety of high- and
low-level physical features to be attached to measurements and, by the same
token, a wide variety of HEP tasks to be accomplished by the same GNN
architectures. GNNs have found powerful use-cases in reconstruction, tagging,
generation, and end-to-end analysis. With the widespread adoption of GNNs in
industry, the HEP community is well-placed to benefit from rapid improvements
in GNN latency and memory usage. However, industry use-cases are not perfectly
aligned with HEP, and much work needs to be done to best match unique GNN
capabilities to unique HEP obstacles. We present here a range of these
capabilities, noting which are currently well-adopted in HEP communities and
which are still immature. We hope to capture the landscape of graph techniques
in machine learning and to point out the most significant gaps that are
inhibiting potentially large leaps in research.
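To make the "native graph" formulation concrete, here is a minimal sketch of turning detector measurements into a graph with PyTorch Geometric. The hit features (x, y, z, energy) and the k-nearest-neighbour edge construction are illustrative assumptions, not this paper's specific pipeline; knn_graph additionally requires the torch-cluster extension.

```python
# Sketch: representing detector hits natively as a graph (assumed features).
import torch
from torch_geometric.data import Data
from torch_geometric.nn import knn_graph  # needs the torch-cluster extension

# Hypothetical low-level features per hit: (x, y, z, energy).
hits = torch.randn(1000, 4)

# Connect each hit to its 8 nearest spatial neighbours; real trackers often
# use detector-geometry-informed edge selection instead.
edge_index = knn_graph(hits[:, :3], k=8)

# High- or low-level physical features attach directly to the nodes.
graph = Data(x=hits, edge_index=edge_index)
print(graph)  # Data(x=[1000, 4], edge_index=[2, 8000])
```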
Related papers
- Enabling Accelerators for Graph Computing [0.0]
Graph Neural Networks (GNNs) offer a novel paradigm for learning on graph-structured data.
GNNs present new computational challenges compared to conventional neural networks.
This thesis aims to develop a better understanding of how GNNs interact with the underlying hardware.
arXiv Detail & Related papers (2023-12-16T23:31:20Z)
- Information Flow in Graph Neural Networks: A Clinical Triage Use Case [49.86931948849343]
Graph Neural Networks (GNNs) have gained popularity in healthcare and other domains due to their ability to process multi-modal and multi-relational graphs.
We investigate how the flow of embedding information within GNNs affects the prediction of links in Knowledge Graphs (KGs).
Our results demonstrate that incorporating domain knowledge into the GNN connectivity leads to better performance than using the same connectivity as the KG or allowing unconstrained embedding propagation.
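A minimal sketch of constraining embedding propagation with domain knowledge, assuming a generic PyTorch Geometric setup; the edge sets and the relevance filter below are placeholders, not the paper's clinical KG.

```python
# Sketch: message passing restricted to domain-informed edges (placeholders).
import torch
from torch_geometric.nn import GCNConv

num_nodes, feat_dim = 100, 16
x = torch.randn(num_nodes, feat_dim)

# Hypothetical KG connectivity, and a stand-in for domain rules that keep
# only the edges an expert considers relevant to the task.
kg_edges = torch.randint(0, num_nodes, (2, 400))
relevant = torch.rand(kg_edges.size(1)) > 0.5
domain_edges = kg_edges[:, relevant]

conv = GCNConv(feat_dim, 32)
h = conv(x, domain_edges)  # embeddings propagate only along allowed edges
```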
arXiv Detail & Related papers (2023-09-12T09:18:12Z)
- The Evolution of Distributed Systems for Graph Neural Networks and their Origin in Graph Processing and Deep Learning: A Survey [17.746899445454048]
Graph Neural Networks (GNNs) are an emerging research field.
GNNs can be applied to various domains including recommendation systems, computer vision, natural language processing, biology and chemistry.
We aim to fill this gap by summarizing and categorizing important methods and techniques for large-scale GNN solutions.
arXiv Detail & Related papers (2023-05-23T09:22:33Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Simple and Efficient Heterogeneous Graph Neural Network [55.56564522532328]
Heterogeneous graph neural networks (HGNNs) can embed the rich structural and semantic information of a heterogeneous graph into node representations.
Existing HGNNs inherit many mechanisms from graph neural networks (GNNs) over homogeneous graphs, especially the attention mechanism and the multi-layer structure.
This paper conducts an in-depth and detailed study of these mechanisms and proposes the Simple and Efficient Heterogeneous Graph Neural Network (SeHGNN).
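A rough sketch in the spirit of removing per-epoch attention: precompute one mean-aggregated view per relation type, then train a plain classifier on the concatenation. All shapes and edge sets are assumptions, and this is a generic simplification, not the exact SeHGNN architecture.

```python
# Sketch: one-off per-relation mean aggregation instead of per-epoch attention.
import torch

def mean_aggregate(x, edge_index, num_nodes):
    """Average source-node features into each destination node."""
    src, dst = edge_index
    out = torch.zeros(num_nodes, x.size(1)).index_add_(0, dst, x[src])
    deg = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(dst.numel()))
    return out / deg.clamp(min=1).unsqueeze(1)

num_nodes, dim = 200, 8
x = torch.randn(num_nodes, dim)
# Hypothetical edge sets, one per relation type in the heterogeneous graph.
relations = [torch.randint(0, num_nodes, (2, 500)) for _ in range(3)]

# Precompute once; training then needs no sparse ops or attention at all.
views = [mean_aggregate(x, ei, num_nodes) for ei in relations]
fused = torch.cat([x] + views, dim=1)            # [num_nodes, dim * 4]
classifier = torch.nn.Linear(fused.size(1), 5)   # trained as usual
logits = classifier(fused)
```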
arXiv Detail & Related papers (2022-07-06T10:01:46Z)
- EIGNN: Efficient Infinite-Depth Graph Neural Networks [51.97361378423152]
Graph neural networks (GNNs) are widely used for modelling graph-structured data in numerous applications.
Motivated by the limited depth of standard GNNs, we propose a GNN model with infinite depth, which we call Efficient Infinite-Depth Graph Neural Networks (EIGNN).
We show that EIGNN has a better ability to capture long-range dependencies than recent baselines, and consistently achieves state-of-the-art performance.
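Such models can be viewed as fixed-point (implicit-depth) layers. Below is a naive-iteration sketch under assumed shapes and a contractive weight scaling; EIGNN itself derives a more efficient solution rather than iterating like this.

```python
# Sketch: an "infinite-depth" GNN as the fixed point Z = tanh(A_hat @ Z @ W) + X,
# solved by naive iteration (EIGNN itself avoids this loop).
import torch

num_nodes, dim = 50, 16
X = torch.randn(num_nodes, dim)                 # input node features
A_hat = torch.rand(num_nodes, num_nodes)        # stand-in adjacency
A_hat = A_hat / torch.linalg.matrix_norm(A_hat, ord=2)  # spectral norm 1
W = torch.randn(dim, dim)
W = 0.9 * W / torch.linalg.matrix_norm(W, ord=2)  # keeps the map contractive

Z = torch.zeros_like(X)
for _ in range(100):
    Z_new = torch.tanh(A_hat @ Z @ W) + X
    done = torch.norm(Z_new - Z) < 1e-5         # (approximate) convergence
    Z = Z_new
    if done:
        break
# Z acts as embeddings from an arbitrarily deep, weight-tied GNN, which is
# what lets such models capture long-range dependencies.
```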
arXiv Detail & Related papers (2022-02-22T08:16:58Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
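A common fixed starting point for such PEs is to concatenate Laplacian eigenvectors to the input features, sketched below with assumed sizes and a random graph; the paper goes further by making the positional representations learnable.

```python
# Sketch: fixed Laplacian-eigenvector PEs concatenated to input features.
import numpy as np
import torch

num_nodes, feat_dim, pe_dim = 30, 8, 4
A = (np.random.rand(num_nodes, num_nodes) > 0.7).astype(float)
A = np.maximum(A, A.T)                     # random symmetric adjacency
np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A             # unnormalized graph Laplacian

# Eigenvectors of the smallest non-trivial eigenvalues act as coordinates;
# eigenvector signs are arbitrary, and real pipelines handle that ambiguity.
eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
pe = torch.tensor(eigvecs[:, 1:pe_dim + 1], dtype=torch.float)

x = torch.randn(num_nodes, feat_dim)
x_with_pe = torch.cat([x, pe], dim=1)      # input to the first GNN layer
```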
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- IGNNITION: Bridging the Gap Between Graph Neural Networks and Networking Systems [4.1591055164123665]
We present IGNNITION, a novel open-source framework that enables fast prototyping of Graph Neural Networks (GNNs) for networking systems.
IGNNITION is based on an intuitive high-level abstraction that hides the complexity behind GNNs.
Our results show that the GNN models produced by IGNNITION are equivalent in terms of accuracy and performance to their native implementations.
arXiv Detail & Related papers (2021-09-14T14:28:21Z)
- Analyzing the Performance of Graph Neural Networks with Pipe Parallelism [2.269587850533721]
We focus on Graph Neural Networks (GNNs) that have found great success in tasks such as node or edge classification and link prediction.
New approaches for processing larger networks are needed to advance graph techniques.
We study how GNNs could be parallelized using existing tools and frameworks that are known to be successful in the deep learning community.
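A minimal sketch of a layer-wise pipeline split, with the two GCN stages and the device assignment as assumptions; real pipe parallelism additionally overlaps micro-batches across stages to keep both devices busy.

```python
# Sketch: splitting a two-layer GNN into pipeline stages on two devices.
import torch
from torch_geometric.nn import GCNConv

dev0 = torch.device("cuda:0" if torch.cuda.device_count() > 1 else "cpu")
dev1 = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

stage1 = GCNConv(16, 32).to(dev0)   # first half of the model on device 0
stage2 = GCNConv(32, 7).to(dev1)    # second half on device 1

x = torch.randn(100, 16, device=dev0)
edge_index = torch.randint(0, 100, (2, 400), device=dev0)

h = torch.relu(stage1(x, edge_index))     # runs on device 0
h = h.to(dev1)                            # activations cross the pipe
out = stage2(h, edge_index.to(dev1))      # runs on device 1
```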
arXiv Detail & Related papers (2020-12-20T04:20:38Z)
- Computing Graph Neural Networks: A Survey from Algorithms to Accelerators [2.491032752533246]
Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data.
This paper makes two main contributions: a review of the field of GNNs from the perspective of computing, and an in-depth analysis of current software and hardware acceleration schemes.
arXiv Detail & Related papers (2020-09-30T22:29:27Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
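A sketch of the plug-in idea as described: compute leading eigenvectors of the adjacency matrix and append them to node features before any GNN. The sizes and the random graph are assumptions.

```python
# Sketch: append leading adjacency eigenvectors so any GNN sees explicit
# structural coordinates (largest eigenvalues, unlike the Laplacian PE above).
import numpy as np
import torch

num_nodes, feat_dim, k = 40, 8, 4
A = (np.random.rand(num_nodes, num_nodes) > 0.7).astype(float)
A = np.maximum(A, A.T)                          # random symmetric adjacency
np.fill_diagonal(A, 0)

eigvals, eigvecs = np.linalg.eigh(A)            # ascending eigenvalues
structure = torch.tensor(eigvecs[:, -k:], dtype=torch.float)

x = torch.randn(num_nodes, feat_dim)
x_aug = torch.cat([x, structure], dim=1)        # plug-in input for any GNN
```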