Efficient, Interpretable Atomistic Graph Neural Network Representation
for Angle-dependent Properties and its Application to Optical Spectroscopy
Prediction
- URL: http://arxiv.org/abs/2109.11576v1
- Date: Thu, 23 Sep 2021 18:10:39 GMT
- Title: Efficient, Interpretable Atomistic Graph Neural Network Representation
for Angle-dependent Properties and its Application to Optical Spectroscopy
Prediction
- Authors: Tim Hsu, Nathan Keilbart, Stephen Weitzner, James Chapman, Penghao
Xiao, Tuan Anh Pham, S. Roger Qiu, Xiao Chen, Brandon C. Wood
- Abstract summary: We extend the recently proposed ALIGNN encoding, which incorporates bond angles, to also include dihedral angles (ALIGNN-d).
This simple extension is shown to lead to a memory-efficient graph representation capable of capturing the geometric information of atomic structures.
We also explore model interpretability based on ALIGNN-d by elucidating the relative contributions of individual structural components to the optical response of the copper complexes.
- Score: 3.2797424029762685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are attractive for learning properties of atomic
structures thanks to their intuitive, physically informed graph encoding of
atoms and bonds. However, conventional GNN encodings do not account for angular
information, which is critical for describing complex atomic arrangements in
disordered materials, interfaces, and molecular distortions. In this work, we
extend the recently proposed ALIGNN encoding, which incorporates bond angles,
to also include dihedral angles (ALIGNN-d), and we apply the model to capture
the structures of aqua copper complexes for spectroscopy prediction. This
simple extension is shown to lead to a memory-efficient graph representation
capable of capturing the full geometric information of atomic structures.
Specifically, the ALIGNN-d encoding is a sparse yet equally expressive
representation compared to the dense, maximally-connected graph, in which all
bonds are encoded. We also explore model interpretability based on ALIGNN-d by
elucidating the relative contributions of individual structural components to
the optical response of the copper complexes. Lastly, we briefly discuss future
developments to validate the computational efficiency and to extend the
interpretability of ALIGNN-d.
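As a concrete illustration of the encoding described in the abstract, the sketch below (a minimal, illustrative reconstruction, not the authors' released implementation) shows how ALIGNN-d-style geometric features could be assembled from atomic coordinates: bond lengths from atom pairs within a cutoff, bond angles from bond pairs sharing an atom, and dihedral angles from paths of three consecutive bonds. The cutoff value and helper names are assumptions for demonstration only.

```python
# Sketch of an ALIGNN-d-style geometric feature extraction (assumed, simplified).
import itertools
import numpy as np


def build_alignn_d_features(positions: np.ndarray, cutoff: float = 2.5):
    """Return bond lengths, bond angles, and dihedral angles for one structure.

    positions: (N, 3) Cartesian coordinates of N atoms.
    cutoff:    bond cutoff in the same units as positions (assumed value).
    """
    n = len(positions)

    # 1) Bonds: unordered atom pairs closer than the cutoff.
    bonds = [
        (i, j)
        for i, j in itertools.combinations(range(n), 2)
        if np.linalg.norm(positions[i] - positions[j]) < cutoff
    ]
    bond_lengths = {
        (i, j): float(np.linalg.norm(positions[i] - positions[j])) for i, j in bonds
    }

    # Adjacency list for walking bonded paths.
    neighbors = {i: set() for i in range(n)}
    for i, j in bonds:
        neighbors[i].add(j)
        neighbors[j].add(i)

    def angle(a, b, c):
        # Angle at atom b formed by bonds b-a and b-c, in radians.
        u, v = positions[a] - positions[b], positions[c] - positions[b]
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

    def dihedral(a, b, c, d):
        # Torsion about bond b-c: angle between planes (a,b,c) and (b,c,d).
        b1 = positions[b] - positions[a]
        b2 = positions[c] - positions[b]
        b3 = positions[d] - positions[c]
        n1, n2 = np.cross(b1, b2), np.cross(b2, b3)
        m1 = np.cross(n1, b2 / np.linalg.norm(b2))
        return float(np.arctan2(np.dot(m1, n2), np.dot(n1, n2)))

    # 2) Bond angles: bond pairs sharing a central atom j (triplets i-j-k).
    bond_angles = [
        ((i, j, k), angle(i, j, k))
        for j in range(n)
        for i, k in itertools.combinations(sorted(neighbors[j]), 2)
    ]

    # 3) Dihedral angles: paths of three consecutive bonds i-j-k-l
    #    (the additional term that distinguishes ALIGNN-d from ALIGNN).
    dihedrals = []
    for j, k in bonds:
        for i in neighbors[j] - {k}:
            for l in neighbors[k] - {j, i}:
                dihedrals.append(((i, j, k, l), dihedral(i, j, k, l)))

    return bond_lengths, bond_angles, dihedrals
```

In an actual model, such lists would be attached as node and edge attributes of the atomistic graph and its (extended) line graph before message passing. The appeal of the sparse construction is that only bonded paths are enumerated, rather than all atom pairs of the dense, maximally connected graph.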
Related papers
- Do Graph Neural Networks Work for High Entropy Alloys? [12.002942104379986]
High-entropy alloys (HEAs) lack chemical long-range order, limiting the applicability of current graph representations.
We introduce the LESets machine learning model, an accurate, interpretable GNN for HEA property prediction.
We demonstrate the accuracy of LESets in modeling the mechanical properties of quaternary HEAs.
arXiv Detail & Related papers (2024-08-29T08:20:02Z) - DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNNs framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experiments conducted on several graph benchmark datasets verify the superiority of DGNN on node classification tasks.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - Atomic and Subgraph-aware Bilateral Aggregation for Molecular
Representation Learning [57.670845619155195]
We introduce a new model for molecular representation learning called the Atomic and Subgraph-aware Bilateral Aggregation (ASBA)
ASBA addresses the limitations of previous atom-wise and subgraph-wise models by incorporating both types of information.
Our method offers a more comprehensive way to learn representations for molecular property prediction and has broad potential in drug and material discovery applications.
arXiv Detail & Related papers (2023-05-22T00:56:00Z) - Graph neural networks for the prediction of molecular structure-property
relationships [59.11160990637615]
Graph neural networks (GNNs) are a novel machine learning method that works directly on the molecular graph.
GNNs allow properties to be learned in an end-to-end fashion, thereby avoiding the need for informative descriptors.
We describe the fundamentals of GNNs and demonstrate the application of GNNs via two examples for molecular property prediction.
arXiv Detail & Related papers (2022-07-25T11:30:44Z) - All-optical graph representation learning using integrated diffractive
photonic computing units [51.15389025760809]
Photonic neural networks perform brain-inspired computations using photons instead of electrons.
We propose an all-optical graph representation learning architecture, termed diffractive graph neural network (DGNN)
We demonstrate the use of DGNN-extracted features for node- and graph-level classification tasks with benchmark databases and achieve superior performance.
arXiv Detail & Related papers (2022-04-23T02:29:48Z) - Molecular Graph Generation via Geometric Scattering [7.796917261490019]
Graph neural networks (GNNs) have been used extensively for addressing problems in drug design and discovery.
We propose a representation-first approach to molecular graph generation.
We show that our architecture learns meaningful representations of drug datasets and provides a platform for goal-directed drug synthesis.
arXiv Detail & Related papers (2021-10-12T18:00:23Z) - Structure-aware Interactive Graph Neural Networks for the Prediction of
Protein-Ligand Binding Affinity [52.67037774136973]
Drug discovery often relies on the successful prediction of protein-ligand binding affinity.
Recent advances have shown great promise in applying graph neural networks (GNNs) for better affinity prediction by learning the representations of protein-ligand complexes.
We propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool).
arXiv Detail & Related papers (2021-07-21T03:34:09Z) - Multi-View Graph Neural Networks for Molecular Property Prediction [67.54644592806876]
We present Multi-View Graph Neural Network (MV-GNN), a multi-view message passing architecture.
In MV-GNN, we introduce a shared self-attentive readout component and a disagreement loss to stabilize the training process.
We further boost the expressive power of MV-GNN by proposing a cross-dependent message passing scheme.
arXiv Detail & Related papers (2020-05-17T04:46:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.