Directed Graph Attention Neural Network Utilizing 3D Coordinates for
Molecular Property Prediction
- URL: http://arxiv.org/abs/2012.00404v1
- Date: Tue, 1 Dec 2020 11:06:40 GMT
- Title: Directed Graph Attention Neural Network Utilizing 3D Coordinates for
Molecular Property Prediction
- Authors: Chen Qian, Yunhai Xiong and Xiang Chen
- Abstract summary: Kernel methods and graph neural networks have been widely studied as two mainstream approaches to property prediction.
In this work, we present the Directed Graph Attention Neural Network (DGANN), which takes only chemical bonds as edges.
Our model has matched or outperformed most baseline graph neural networks on the QM9 dataset.
- Score: 11.726245297344418
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The prosperity of computer vision (CV) and natural language processing (NLP)
in recent years has spurred the development of deep learning in many other
domains. The advancement in machine learning provides us with an alternative
to the computationally expensive density functional theory (DFT).
Kernel methods and graph neural networks have been widely studied as two
mainstream approaches to property prediction. Promising graph neural networks
have achieved accuracy comparable to DFT for specific targets in recent
studies. However, most high-precision graph neural networks to date require
fully connected graphs with pairwise distance distributions as edge
information. In this work, we present the Directed Graph Attention Neural
Network (DGANN), which takes only chemical bonds as edges and operates on the
bonds and atoms of molecules. DGANN is distinguished from previous models by
the following features: (1) It learns a local chemical-environment encoding
through a graph attention mechanism on chemical bonds, where each initial edge
message flows into each message-passing trajectory only once. (2) Transformer
blocks aggregate the global molecular representation from the local atomic
encodings. (3) Position vectors and coordinates are used as inputs instead of
distances. Our model matches or outperforms most baseline graph neural
networks on the QM9 dataset even without a thorough hyper-parameter search.
Moreover, this work suggests that models directly utilizing 3D coordinates can
still reach high accuracy in molecular representation even without rotational
and translational invariance built in.
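The abstract names three concrete mechanisms: attention over directed bond messages in which the reverse bond is excluded, a transformer readout over atomic encodings, and raw 3D position vectors as inputs. Below is a minimal, hypothetical PyTorch sketch of how those pieces could fit together. It is not the authors' implementation: every dimension, the GRU-style edge update, and the mean readout are assumptions.
```python
import torch
import torch.nn as nn

class DGANNSketch(nn.Module):
    def __init__(self, atom_dim=16, hidden=32, rounds=3):
        super().__init__()
        self.edge_in = nn.Linear(2 * atom_dim + 3, hidden)
        self.score = nn.Linear(2 * hidden, 1)      # attention logit per predecessor bond
        self.update = nn.GRUCell(hidden, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.readout = nn.TransformerEncoder(layer, num_layers=2)
        self.out = nn.Linear(hidden, 1)
        self.rounds, self.hidden = rounds, hidden

    def forward(self, x, pos, edges):
        # x: (N, atom_dim) atom features; pos: (N, 3) coordinates;
        # edges: directed bonds (u, v), each chemical bond listed both ways.
        src = torch.tensor([u for u, _ in edges])
        dst = torch.tensor([v for _, v in edges])
        vec = pos[dst] - pos[src]                  # raw 3D bond vectors, not distances
        h = torch.relu(self.edge_in(torch.cat([x[src], x[dst], vec], dim=-1)))
        # Predecessors of bond (u -> v) are bonds (w -> u) with w != v, so a
        # bond's initial message never flows straight back along its reverse.
        pred = [[j for j, (w, t) in enumerate(edges) if t == u and w != v]
                for (u, v) in edges]
        for _ in range(self.rounds):
            new_h = h.clone()
            for i, js in enumerate(pred):
                if not js:
                    continue
                m = h[js]                                         # (k, hidden) incoming messages
                q = h[i].expand(len(js), -1)
                a = torch.softmax(self.score(torch.cat([q, m], dim=-1)).squeeze(-1), dim=0)
                agg = (a.unsqueeze(-1) * m).sum(dim=0, keepdim=True)
                new_h[i] = self.update(agg, h[i].unsqueeze(0)).squeeze(0)
            h = new_h
        atom_h = torch.zeros(x.size(0), self.hidden)              # pool bond messages onto atoms
        atom_h.index_add_(0, dst, h)
        z = self.readout(atom_h.unsqueeze(0)).mean(dim=1)         # global molecular representation
        return self.out(z)                                        # predicted scalar property

# Toy run: 3 atoms, 2 chemical bonds (4 directed edges).
x, pos = torch.randn(3, 16), torch.randn(3, 3)
edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
print(DGANNSketch()(x, pos, edges).shape)                         # torch.Size([1, 1])
```
The per-edge Python loop keeps the predecessor-exclusion logic explicit; a real implementation would vectorize it with scatter operations.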
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems [17.373668215331737]
We propose a new approach called deep graph stream support vector data description (SVDD) for anomaly detection.
We first use a transformer to preserve both short- and long-term temporal patterns of monitoring data in temporal embeddings.
We then cluster these embeddings according to sensor type and use them to estimate the change in connectivity between sensors, constructing a new weighted graph.
arXiv Detail & Related papers (2023-02-24T22:14:39Z)
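As a hedged illustration of the pipeline summarized above (not the paper's code), the sketch below pairs a transformer encoder over windowed sensor readings with the standard deep-SVDD distance-to-center anomaly score; the clustering and weighted-graph steps are omitted, and all dimensions are placeholders.
```python
import torch
import torch.nn as nn

class StreamSVDDSketch(nn.Module):
    def __init__(self, n_sensors, d_model=32):
        super().__init__()
        self.proj = nn.Linear(n_sensors, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # short- and long-range temporal patterns
        self.center = nn.Parameter(torch.zeros(d_model), requires_grad=False)  # SVDD hypersphere center

    def forward(self, window):
        # window: (batch, time, n_sensors) slice of monitoring data
        z = self.encoder(self.proj(window)).mean(dim=1)   # one temporal embedding per window
        return ((z - self.center) ** 2).sum(dim=1)        # distance to center = anomaly score

scores = StreamSVDDSketch(n_sensors=8)(torch.randn(4, 50, 8))
print(scores.shape)   # torch.Size([4]); larger score = more anomalous window
```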
- Convolutional Neural Networks on Manifolds: From Graphs and Back [122.06927400759021]
We propose a manifold neural network (MNN) composed of a bank of manifold convolutional filters and point-wise nonlinearities.
In sum, we treat the manifold model as the limit of large graphs, construct MNNs on it, and recover graph neural networks by discretizing the MNNs.
arXiv Detail & Related papers (2022-10-01T21:17:39Z)
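Since the summary presents MNNs as the large-graph limit of graph convolutions, the discretization of one manifold filter bank reduces to learnable polynomials in the graph Laplacian followed by a pointwise nonlinearity. A minimal sketch under that reading (filter order and width are arbitrary choices):
```python
import torch
import torch.nn as nn

class GraphFilterBank(nn.Module):
    def __init__(self, taps=4, banks=8):
        super().__init__()
        self.h = nn.Parameter(0.1 * torch.randn(banks, taps))  # learnable filter coefficients

    def forward(self, L, x):
        # L: (N, N) graph Laplacian; x: (N,) graph signal
        powers = [x]
        for _ in range(self.h.shape[1] - 1):
            powers.append(L @ powers[-1])       # builds L^k x for k = 0..taps-1
        P = torch.stack(powers)                 # (taps, N)
        return torch.relu(self.h @ P)           # (banks, N): filter bank + pointwise nonlinearity

L = torch.diag(torch.tensor([1., 2., 1.])) - torch.tensor(
    [[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # Laplacian of a 3-node path
print(GraphFilterBank()(L, torch.randn(3)).shape)  # torch.Size([8, 3])
```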
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
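For concreteness, here is one way to build the two graph types compared above from an (N, 3) array of atomic coordinates; the choice of k and the toy data are arbitrary.
```python
import numpy as np

def knn_edges(pos, k=3):
    # Directed KNN graph: keep each atom's k nearest neighbors.
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # no self-loops
    nbrs = np.argsort(d, axis=1)[:, :k]          # k nearest per node
    return [(i, j) for i in range(len(pos)) for j in nbrs[i]]

def fc_edges(n):
    # Fully-connected graph: every ordered pair of distinct atoms.
    return [(i, j) for i in range(n) for j in range(n) if i != j]

pos = np.random.rand(5, 3)
print(len(knn_edges(pos)), len(fc_edges(5)))     # 15 vs 20 directed edges
```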
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Directional Message Passing on Molecular Graphs via Synthetic Coordinates [7.314446024059812]
We propose synthetic coordinates that enable the use of advanced GNNs without requiring the true molecular configuration.
We show that with this transformation we can reduce the error of a normal graph neural network by 55% on the ZINC benchmark.
We furthermore set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates in the SMP and DimeNet++ models.
arXiv Detail & Related papers (2021-11-08T18:53:58Z)
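The summary does not spell out the construction, so as a hedged stand-in the snippet below uses hop distances to a few anchor nodes as synthetic coordinates; the paper's own constructions (e.g., distance bounds and PPR-based values) are more refined.
```python
from collections import deque

def hop_distances(adj, source):
    # adj: {node: [neighbors]}; BFS hop count from source to every reachable node
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def synthetic_coords(adj, anchors):
    # Each node's "coordinate" is its vector of hop distances to the anchors
    # (-1 marks unreachable nodes).
    tables = [hop_distances(adj, a) for a in anchors]
    return {v: tuple(t.get(v, -1) for t in tables) for v in adj}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # a 4-node path graph
print(synthetic_coords(adj, anchors=[0, 3]))       # e.g. node 1 -> (1, 2)
```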
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
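One standard instance of node positional encodings of this kind (not necessarily the paper's exact choice) is the k smallest non-trivial eigenvectors of the graph Laplacian, concatenated to node features at the input layer; the learnable-PE machinery itself is omitted here.
```python
import numpy as np

def laplacian_pe(A, k=2):
    # A: (N, N) symmetric adjacency matrix
    D = np.diag(A.sum(axis=1))
    L = D - A
    w, V = np.linalg.eigh(L)              # eigenvalues in ascending order
    return V[:, 1:k + 1]                  # skip the constant eigenvector

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # path graph on 4 nodes
print(laplacian_pe(A, k=2).shape)           # (4, 2): one PE row per node
```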
- Learning Graph Neural Networks with Positive and Unlabeled Nodes [34.903471348798725]
Graph neural networks (GNNs) are important tools for transductive learning tasks, such as node classification in graphs.
Most GNN models aggregate information from short distances in each round and fail to capture long-distance relationships in graphs.
In this paper, we propose a novel graph neural network framework, long-short distance aggregation networks (LSDAN) to overcome these limitations.
arXiv Detail & Related papers (2021-03-08T11:43:37Z)
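The long-short distance idea can be sketched, loosely, as mixing feature propagation over several hop counts with learned weights; the actual LSDAN attention scheme is more involved and is not described in the summary.
```python
import torch
import torch.nn as nn

class LongShortAggregation(nn.Module):
    def __init__(self, hops=3):
        super().__init__()
        self.mix = nn.Parameter(torch.zeros(hops))   # learnable per-hop weights

    def forward(self, A, x):
        # A: (N, N) row-normalized adjacency; x: (N, dim) node features
        out, h = torch.zeros_like(x), x
        for w in torch.softmax(self.mix, dim=0):
            h = A @ h                                # one more hop of propagation
            out = out + w * h                        # short and long hops both contribute
        return out

A = torch.rand(5, 5)
A = A / A.sum(dim=1, keepdim=True)
print(LongShortAggregation()(A, torch.randn(5, 8)).shape)   # torch.Size([5, 8])
```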
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
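The summary does not define the three-part filter space, so as a hedged sketch here is only the central ingredient the title names: a filter built from the Moore-Penrose pseudoinverse of the graph Laplacian, which acts on the low end of the spectrum where dense graphs have large eigengaps.
```python
import numpy as np

def pinv_filter(A, x):
    # A: (N, N) adjacency; x: (N,) graph signal
    L = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
    return np.linalg.pinv(L) @ x                   # pseudoinverse-based propagation

A = np.ones((4, 4)) - np.eye(4)                    # K4: a small dense graph
print(pinv_filter(A, np.array([1.0, 0.0, 0.0, 0.0])))
```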
- Isometric Graph Neural Networks [5.306334746787569]
We propose a technique to learn Isometric Graph Neural Networks (IGNN).
IGNN requires changing the input representation space and loss function to enable any GNN algorithm to generate representations that reflect distances between nodes.
We observe a consistent and substantial improvement, as high as 400%, in Kendall's Tau (KT).
arXiv Detail & Related papers (2020-06-16T22:51:13Z)
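The description above translates naturally into a distance-preserving penalty; the sketch below is one plausible form (the exact IGNN loss and input-representation change may differ), shown with toy hop distances.
```python
import torch

def isometry_loss(emb, hop_dist):
    # emb: (N, d) node embeddings from any GNN; hop_dist: (N, N) graph distances
    pd = torch.cdist(emb, emb)               # pairwise embedding distances
    return ((pd - hop_dist) ** 2).mean()     # penalize distortion of graph distances

emb = torch.randn(6, 4, requires_grad=True)
hop = torch.randint(1, 4, (6, 6)).float()   # toy (not necessarily metric) distances
hop.fill_diagonal_(0)
isometry_loss(emb, hop).backward()          # differentiable, so trainable end to end
print(emb.grad.shape)                       # torch.Size([6, 4])
```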
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.