Domain-informed graph neural networks: a quantum chemistry case study
- URL: http://arxiv.org/abs/2208.11934v1
- Date: Thu, 25 Aug 2022 08:36:50 GMT
- Title: Domain-informed graph neural networks: a quantum chemistry case study
- Authors: Jay Morgan, Adeline Paiement, and Christian Klinke
- Abstract summary: We focus on graph neural networks (GNN), with a use case of estimating the potential energy of chemical systems (molecules and crystals) represented as graphs.
We integrate two elements of domain knowledge into the design of the GNN to constrain and regularise its learning, towards higher accuracy and generalisation.
- Score: 0.34410212782758054
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore different strategies to integrate prior domain knowledge into the
design of a deep neural network (DNN). We focus on graph neural networks (GNN),
with a use case of estimating the potential energy of chemical systems
(molecules and crystals) represented as graphs. We integrate two elements of
domain knowledge into the design of the GNN to constrain and regularise its
learning, towards higher accuracy and generalisation. First, knowledge on the
existence of different types of relations (chemical bonds) between atoms is
used to modulate the interaction of nodes in the GNN. Second, knowledge of the
relevance of some physical quantities is used to constrain the learnt features
towards a higher physical relevance using a simple multi-task paradigm. We
demonstrate the general applicability of our knowledge integrations by applying
them to two architectures that rely on different mechanisms to propagate
information between nodes and to update node states.
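To make the two integrations concrete, here is a minimal, hypothetical PyTorch sketch (an illustration under stated assumptions, not the authors' code): per-bond-type message networks let the relation type modulate node interactions, and auxiliary heads on the shared representation implement the simple multi-task regularisation.

```python
import torch
import torch.nn as nn

class BondTypedMessagePassing(nn.Module):
    """One message-passing step in which each bond type (e.g. single,
    double, ionic) gets its own message network, so the relation type
    modulates how neighbouring atoms interact."""

    def __init__(self, dim: int, num_bond_types: int = 3):
        super().__init__()
        self.msg = nn.ModuleList(
            nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
            for _ in range(num_bond_types)
        )
        self.update = nn.GRUCell(dim, dim)

    def forward(self, h, edge_index, bond_type):
        # h: (num_atoms, dim), edge_index: (2, num_edges), bond_type: (num_edges,)
        src, dst = edge_index
        pair = torch.cat([h[src], h[dst]], dim=-1)
        agg = torch.zeros_like(h)
        for t, net in enumerate(self.msg):
            mask = bond_type == t
            agg.index_add_(0, dst[mask], net(pair[mask]))  # sum per receiving atom
        return self.update(agg, h)

class MultiTaskEnergyGNN(nn.Module):
    """Shared trunk with an energy head plus auxiliary heads predicting
    other physical quantities (hypothetical choices), used only to
    constrain the learnt features via a multi-task loss."""

    def __init__(self, dim: int = 64, num_aux: int = 2):
        super().__init__()
        self.embed = nn.Embedding(100, dim)  # atomic number -> initial feature
        self.mp = BondTypedMessagePassing(dim)
        self.energy_head = nn.Linear(dim, 1)
        self.aux_heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_aux))

    def forward(self, z, edge_index, bond_type):
        h = self.mp(self.embed(z), edge_index, bond_type)
        pooled = h.sum(dim=0)  # energy is extensive: sum over atoms
        return self.energy_head(pooled), [head(pooled) for head in self.aux_heads]
```

A training loss would then combine the energy error with down-weighted auxiliary errors, so the auxiliary targets shape the features without dominating the energy task.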
Related papers
- Pushing the Limits of All-Atom Geometric Graph Neural Networks: Pre-Training, Scaling and Zero-Shot Transfer [15.302727191576784]
Geometric graph neural networks (Geom-GNNs) with all-atom information have transformed atomistic simulations.
We study the scaling behaviors of Geom-GNNs under self-supervised pre-training, supervised and unsupervised learning setups.
We show how all-atom graph embedding can be organically combined with other neural architectures to enhance the expressive power.
arXiv Detail & Related papers (2024-10-29T03:07:33Z)
- Deep Neural Networks via Complex Network Theory: a Perspective [3.1023851130450684]
Deep Neural Networks (DNNs) can be represented as graphs whose links and vertices iteratively process data and solve tasks sub-optimally. Complex Network Theory (CNT), merging statistical physics with graph theory, provides a method for interpreting neural networks by analysing their weights and neuron structures.
In this work, we extend the existing CNT metrics with measures that sample from the DNNs' training distribution, shifting from a purely topological analysis to one that connects with the interpretability of deep learning.
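To make the CNT view concrete, here is a minimal sketch (an illustration, not the paper's actual metrics) of one classic complex-network measure, node strength, computed over a DNN's weight matrices:

```python
import torch
import torch.nn as nn

def node_strengths(model: nn.Module) -> dict:
    """Classic CNT measure: a neuron's strength is the sum of the
    absolute weights of its incoming (or outgoing) links."""
    strengths = {}
    linears = [m for m in model.modules() if isinstance(m, nn.Linear)]
    for i, layer in enumerate(linears):
        w = layer.weight.detach().abs()   # (out_features, in_features)
        strengths[f"layer{i}_in"] = w.sum(dim=1)   # fan-in strength per output neuron
        strengths[f"layer{i}_out"] = w.sum(dim=0)  # fan-out strength per input neuron
    return strengths

mlp = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
print({k: v.shape for k, v in node_strengths(mlp).items()})
```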
arXiv Detail & Related papers (2024-04-17T08:42:42Z)
- A Comparison Between Invariant and Equivariant Classical and Quantum Graph Neural Networks [3.350407101925898]
Deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics.
One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles.
In this paper, we perform a fair and comprehensive comparison between classical graph neural networks (GNNs) and their quantum counterparts.
arXiv Detail & Related papers (2023-11-30T16:19:13Z)
- Tensor Networks Meet Neural Networks: A Survey and Future Perspectives [27.878669143107885]
Tensor networks (TNs) and neural networks (NNs) are two fundamental data modeling approaches.
TNs solve the curse of dimensionality in large-scale tensors by converting an exponential number of dimensions to polynomial complexity.
NNs have displayed exceptional performance in various applications, e.g., computer vision, natural language processing, and robotics research.
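The exponential-to-polynomial point can be illustrated with a parameter count; a back-of-the-envelope sketch assuming a hypothetical tensor-train (TT) factorisation of a 1024 x 1024 layer:

```python
# Dense weight matrix for a 1024 -> 1024 layer.
dense_params = 1024 * 1024

# TT-style factorisation: view both sides as 4 x 4 x 4 x 4 x 4 and connect
# five small cores with TT-rank r. Parameters grow linearly in the number
# of cores instead of exponentially in the number of dimensions.
modes_in, modes_out, r = [4] * 5, [4] * 5, 8
ranks = [1] + [r] * 4 + [1]
tt_params = sum(
    ranks[k] * modes_in[k] * modes_out[k] * ranks[k + 1]
    for k in range(5)
)
print(dense_params, tt_params)  # 1048576 vs 3328
```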
arXiv Detail & Related papers (2023-01-22T17:35:56Z)
- Geometric Knowledge Distillation: Topology Compression for Graph Neural Networks [80.8446673089281]
We study a new paradigm of knowledge transfer that aims at encoding graph topological information into graph neural networks (GNNs).
We propose Neural Heat Kernel (NHK) to encapsulate the geometric property of the underlying manifold concerning the architecture of GNNs.
A fundamental and principled solution is derived by aligning NHKs on teacher and student models, dubbed Geometric Knowledge Distillation.
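The paper defines the NHK with respect to the GNN architecture itself; as a loose, hypothetical approximation of the alignment idea only, one can match heat kernels exp(-tL) built from teacher and student node embeddings:

```python
import torch

def heat_kernel(x: torch.Tensor, t: float = 1.0) -> torch.Tensor:
    """Heat kernel exp(-t L) of a graph built from pairwise
    affinities of node embeddings x, shape (num_nodes, dim)."""
    affinity = torch.exp(-torch.cdist(x, x) ** 2)
    laplacian = torch.diag(affinity.sum(dim=1)) - affinity
    return torch.matrix_exp(-t * laplacian)

def geometric_kd_loss(h_teacher, h_student, t: float = 1.0):
    # Align the two kernels with a squared Frobenius-norm penalty.
    return (heat_kernel(h_teacher, t) - heat_kernel(h_student, t)).norm() ** 2

h_t, h_s = torch.randn(10, 64), torch.randn(10, 32)  # teacher/student embeddings
print(geometric_kd_loss(h_t, h_s))
```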
arXiv Detail & Related papers (2022-10-24T08:01:58Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
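A minimal sketch of the clause-enhancement idea behind KENN, loosely following the published description; the function name and the softmax boost below are illustrative, not the library's actual API:

```python
import torch

def clause_boost(pre: torch.Tensor, signs: torch.Tensor, weight: float = 1.0):
    """Soft enhancement of the clause OR_i (sign_i * literal_i):
    boost most the literal already closest to satisfying the clause,
    nudging predictions towards logical consistency.
    pre: (batch, num_literals) preactivations; signs: +1 / -1 per literal."""
    lit = signs * pre                         # preactivation of each literal
    delta = weight * torch.softmax(lit, dim=-1)
    return pre + signs * delta                # boosted preactivations

# Example: the clause  Smoker(x) -> Cancer(x)  ==  not Smoker(x) OR Cancer(x)
pre = torch.tensor([[2.0, -1.0]])             # [Smoker, Cancer] preactivations
print(clause_boost(pre, torch.tensor([-1.0, 1.0])))
```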
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
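A minimal sketch of the two constructions, assuming only 3-D node positions as input:

```python
import numpy as np

def knn_edges(pos: np.ndarray, k: int) -> list:
    """KNN graph: connect each node to its k closest nodes."""
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)                 # no self-loops
    nbrs = np.argsort(dist, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(pos)) for j in nbrs[i]]

def fc_edges(n: int) -> list:
    """FC graph: every ordered pair of distinct nodes."""
    return [(i, j) for i in range(n) for j in range(n) if i != j]

pos = np.random.rand(6, 3)                         # 6 atoms in 3-D
print(len(knn_edges(pos, k=2)), len(fc_edges(6)))  # 12 vs 30 directed edges
```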
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, like in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
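One common choice in this line of work is Laplacian eigenvector PE; a minimal sketch (an illustration, not the paper's code) of computing it and injecting it at the input layer:

```python
import numpy as np

def laplacian_pe(adj: np.ndarray, k: int) -> np.ndarray:
    """First k non-trivial Laplacian eigenvectors as node positional encodings."""
    lap = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(lap)      # eigenvectors sorted by eigenvalue
    return vecs[:, 1 : k + 1]          # skip the constant eigenvector

# 4-node cycle graph; PE is concatenated to node features at the input layer.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = np.random.rand(4, 8)
inputs = np.concatenate([feats, laplacian_pe(adj, k=2)], axis=1)
print(inputs.shape)                    # (4, 10)
```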
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- A Practical Tutorial on Graph Neural Networks [49.919443059032226]
Graph neural networks (GNNs) have recently grown in popularity in the field of artificial intelligence (AI).
This tutorial exposes the power and novelty of GNNs to AI practitioners.
arXiv Detail & Related papers (2020-10-11T12:36:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.