ChiENN: Embracing Molecular Chirality with Graph Neural Networks
- URL: http://arxiv.org/abs/2307.02198v2
- Date: Mon, 10 Jul 2023 06:23:51 GMT
- Title: ChiENN: Embracing Molecular Chirality with Graph Neural Networks
- Authors: Piotr Gai\'nski, Micha{\l} Koziarski, Jacek Tabor, Marek \'Smieja
- Abstract summary: We propose a theoretically justified message-passing scheme, which makes GNNs sensitive to the order of node neighbors.
We apply that concept in the context of molecular chirality to construct a Chiral Edge Neural Network (ChiENN) layer which can be appended to any GNN model.
Our experiments show that adding ChiENN layers to a GNN outperforms current state-of-the-art methods in chiral-sensitive molecular property prediction tasks.
- Score: 10.19088492223333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) play a fundamental role in many deep learning
problems, in particular in cheminformatics. However, typical GNNs cannot
capture the concept of chirality, which means they do not distinguish between
the 3D graph of a chemical compound and its mirror image (enantiomer). The
ability to distinguish between enantiomers is important especially in drug
discovery because enantiomers can have very distinct biochemical properties. In
this paper, we propose a theoretically justified message-passing scheme, which
makes GNNs sensitive to the order of node neighbors. We apply that general
concept in the context of molecular chirality to construct a Chiral Edge Neural
Network (ChiENN) layer, which can be appended to any GNN model to enable
chirality-awareness. Our experiments show that adding ChiENN layers to a GNN
outperforms current state-of-the-art methods in chiral-sensitive molecular
property prediction tasks.
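To make the central idea concrete, here is a minimal numpy sketch of an order-sensitive message-passing update in the spirit of the scheme described above. It is an illustration, not the authors' ChiENN layer: each neighbor position in a prescribed ordering gets its own weight matrix, so permuting the neighbors changes the aggregated message, whereas a standard sum or mean aggregation would not notice the change.

```python
# Minimal sketch (not the authors' code) of an order-sensitive message-passing
# update: the k-th neighbor in a prescribed ordering is transformed by its own
# weight matrix, so swapping two neighbors changes the aggregated message.
# For molecular chirality the ordering would come from the 3D arrangement of
# neighbors; here it is simply given as input.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                                # feature dimension
max_neighbors = 4                                    # assumed fixed neighbor budget
W = rng.normal(size=(max_neighbors, d, d)) * 0.1     # one matrix per position
W_self = rng.normal(size=(d, d)) * 0.1

def order_sensitive_update(h, neighbors_in_order):
    """h: (n, d) node features; neighbors_in_order[i]: ordered neighbor ids of node i."""
    out = np.zeros_like(h)
    for i, nbrs in enumerate(neighbors_in_order):
        msg = h[i] @ W_self
        for k, j in enumerate(nbrs[:max_neighbors]):
            msg = msg + h[j] @ W[k]                  # position-dependent transform
        out[i] = np.tanh(msg)
    return out

# Toy check: reversing the neighbor order of node 0 changes its representation,
# which an order-invariant sum/mean aggregation could not do.
h = rng.normal(size=(4, d))
order_a = [[1, 2, 3], [0], [0], [0]]
order_b = [[3, 2, 1], [0], [0], [0]]
print(np.allclose(order_sensitive_update(h, order_a)[0],
                  order_sensitive_update(h, order_b)[0]))   # False
```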
Related papers
- Spiking Graph Neural Network on Riemannian Manifolds [51.15400848660023]
Graph neural networks (GNNs) have become the dominant solution for learning on graphs.
Existing spiking GNNs consider graphs in Euclidean space, ignoring the structural geometry.
We present a Manifold-valued Spiking GNN (MSG).
MSG achieves superior performance to previous spiking GNNs and better energy efficiency than conventional GNNs.
arXiv Detail & Related papers (2024-10-23T15:09:02Z)
- Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel [48.6168145845412]
We propose a method to abstract the collective information of atomic groups into a few Neural Atoms by implicitly projecting the atoms of a molecule.
Specifically, we explicitly exchange the information among neural atoms and project them back to the atoms' representations as an enhancement.
With this mechanism, neural atoms establish the communication channels among distant nodes, effectively reducing the interaction scope of arbitrary node pairs into a single hop.
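One hedged reading of this mechanism, as a numpy sketch: atoms are softly assigned to a few "neural atoms" via attention, the neural atoms exchange information, and the result is projected back onto the atoms. The names Q and W_mix and the exact attention form are illustrative assumptions, not the paper's layer.

```python
# Hedged numpy sketch of the projection idea described above (not the paper's
# implementation): atoms are softly assigned to a few "neural atoms", the
# neural atoms exchange information, and the result is projected back and
# added to the original atom features.
import numpy as np

rng = np.random.default_rng(1)
n_atoms, d, k = 20, 16, 3                       # k = number of neural atoms

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

H = rng.normal(size=(n_atoms, d))               # atom features
Q = rng.normal(size=(k, d)) * 0.1               # learnable neural-atom queries (assumed)
W_mix = rng.normal(size=(d, d)) * 0.1           # exchange among neural atoms (assumed)

A = softmax(Q @ H.T / np.sqrt(d), axis=1)       # (k, n) soft assignment of atoms
neural_atoms = A @ H                            # aggregate atoms into k slots
neural_atoms = np.tanh(neural_atoms @ W_mix)    # exchange / mix the k slots
H_enhanced = H + A.T @ neural_atoms             # project back as an enhancement
print(H_enhanced.shape)                         # (20, 16)
```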
arXiv Detail & Related papers (2023-11-02T14:44:50Z)
- HiGNN: Hierarchical Informative Graph Neural Networks for Molecular Property Prediction Equipped with Feature-Wise Attention [5.735627221409312]
We propose a hierarchical informative graph neural network framework (termed HiGNN) for predicting molecular properties.
Experiments demonstrate that HiGNN achieves state-of-the-art predictive performance on many challenging drug discovery-associated benchmark datasets.
arXiv Detail & Related papers (2022-08-30T05:16:15Z)
- Graph neural networks for the prediction of molecular structure-property relationships [59.11160990637615]
Graph neural networks (GNNs) are a novel machine learning method that works directly on the molecular graph.
GNNs allow properties to be learned in an end-to-end fashion, thereby avoiding the need for informative descriptors.
We describe the fundamentals of GNNs and demonstrate the application of GNNs via two examples for molecular property prediction.
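As a generic illustration of this end-to-end idea (not the paper's own examples), a minimal message-passing GNN that maps raw atom features and bonds to a scalar property could look like the following numpy sketch; all weights and the toy molecule are placeholders.

```python
# A minimal, generic message-passing GNN in numpy: raw atom features and bonds
# in, a predicted scalar property out, with no hand-crafted descriptors.
import numpy as np

rng = np.random.default_rng(2)
d = 8
W_msg = rng.normal(size=(d, d)) * 0.1
W_upd = rng.normal(size=(d, d)) * 0.1
w_out = rng.normal(size=(d,)) * 0.1

def gnn_predict(node_feats, edges, n_layers=3):
    """node_feats: (n, d) atom features; edges: list of (i, j) undirected bonds."""
    h = node_feats
    for _ in range(n_layers):
        agg = np.zeros_like(h)
        for i, j in edges:                  # message passing over bonds
            agg[i] += h[j] @ W_msg
            agg[j] += h[i] @ W_msg
        h = np.tanh(h @ W_upd + agg)        # node update
    return float(h.sum(axis=0) @ w_out)     # sum readout -> scalar property

x = rng.normal(size=(5, d))                 # e.g., 5 atoms
bonds = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(gnn_predict(x, bonds))
```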
arXiv Detail & Related papers (2022-07-25T11:30:44Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
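A small sketch of the two constructions, assuming 3D point coordinates as input (illustrative only; the paper's exact setup may differ):

```python
# Building a K-nearest neighbor (KNN) graph and a fully-connected (FC) graph
# from 3D coordinates. Edges are directed pairs (i, j).
import numpy as np

rng = np.random.default_rng(3)
coords = rng.normal(size=(6, 3))            # 6 points in 3D

def knn_edges(coords, k=2):
    dists = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)         # exclude self-loops
    nbrs = np.argsort(dists, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(coords)) for j in nbrs[i]]

def fc_edges(coords):
    n = len(coords)
    return [(i, j) for i in range(n) for j in range(n) if i != j]

print(len(knn_edges(coords, k=2)))          # 12 directed edges (n * k)
print(len(fc_edges(coords)))                # 30 directed edges (n * (n - 1))
```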
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- GemNet: Universal Directional Graph Neural Networks for Molecules [7.484063729015126]
We show that GNNs with directed edge embeddings and two-hop message passing are indeed universal approximators for predictions.
We then leverage these insights and multiple structural improvements to propose the geometric message passing neural network (GemNet).
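A rough illustration of the directed-edge, two-hop idea (not GemNet itself, which additionally uses geometric information): the embedding of a directed edge j->i is updated from the edges k->j, i.e. from nodes two hops away from i.

```python
# Two-hop message passing over directed edge embeddings (illustrative sketch).
import numpy as np

rng = np.random.default_rng(5)
d = 8
W = rng.normal(size=(d, d)) * 0.1

def two_hop_edge_update(edge_emb, edges):
    """edge_emb: dict (j, i) -> (d,) vector; edges: list of directed (j, i)."""
    incoming = {}                                    # node j -> edges k->j
    for k, j in edges:
        incoming.setdefault(j, []).append((k, j))
    new_emb = {}
    for j, i in edges:
        # Aggregate messages from edges ending at j, excluding the back-edge i->j.
        msgs = [edge_emb[(k, jj)] @ W for (k, jj) in incoming.get(j, []) if k != i]
        agg = np.sum(msgs, axis=0) if msgs else np.zeros(d)
        new_emb[(j, i)] = np.tanh(edge_emb[(j, i)] + agg)
    return new_emb

edges = [(0, 1), (1, 0), (1, 2), (2, 1)]
emb = {e: rng.normal(size=d) for e in edges}
print(two_hop_edge_update(emb, edges)[(1, 2)].shape)   # (8,)
```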
arXiv Detail & Related papers (2021-06-02T15:44:55Z)
- Improving Molecular Graph Neural Network Explainability with Orthonormalization and Induced Sparsity [0.0]
We propose two simple regularization techniques to apply during the training of GCNNs.
BRO encourages graph convolution operations to generate orthonormal node embeddings.
Gini regularization is applied to the weights of the output layer and constrains the number of dimensions the model can use to make predictions.
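Under one reading of the two terms described above (the exact formulations in the paper may differ), a numpy sketch: a BRO-style penalty that pushes node embeddings toward orthonormality, and a Gini coefficient of the output-layer weights, which the regularizer would push upward so that only a few dimensions carry most of the weight.

```python
# Illustrative penalties under our own reading of the abstract, not the
# paper's exact formulas.
import numpy as np

def bro_penalty(Z):
    """Orthonormality penalty || Z^T Z - I ||_F^2 on node embeddings Z (n, d)."""
    d = Z.shape[1]
    G = Z.T @ Z
    return float(np.sum((G - np.eye(d)) ** 2))

def gini_coefficient(w):
    """Gini coefficient of |w|: 0 = weights spread evenly, -> 1 = concentrated."""
    v = np.sort(np.abs(w))
    n = len(v)
    cum = np.cumsum(v)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

rng = np.random.default_rng(4)
Z = rng.normal(size=(10, 4)) / np.sqrt(10)      # node embeddings
w_out = rng.normal(size=(4,))                   # output-layer weights
print(bro_penalty(Z), gini_coefficient(w_out))
```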
arXiv Detail & Related papers (2021-05-11T08:13:34Z)
- Weisfeiler-Lehman Embedding for Molecular Graph Neural Networks [15.662820454886203]
A graph neural network (GNN) is a good choice for predicting the chemical properties of molecules.
In this paper, we expand an atom representation using Weisfeiler-Lehman embedding.
We show that the WL embedding can replace the first two layers of a ReLU GNN with a smaller weight norm.
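For context, a small pure-Python sketch of the standard Weisfeiler-Lehman relabeling that such an embedding builds on (how the paper maps the resulting labels to embeddings is not shown here):

```python
# Weisfeiler-Lehman (WL) relabeling: each atom's label is repeatedly refined by
# compressing the pair (own label, sorted multiset of neighbor labels) into a
# new integer label.
def wl_labels(initial_labels, adjacency, n_iters=2):
    """initial_labels: list of hashable atom labels; adjacency: dict node -> neighbor list."""
    labels = list(initial_labels)
    for _ in range(n_iters):
        signatures = [
            (labels[i], tuple(sorted(labels[j] for j in adjacency[i])))
            for i in range(len(labels))
        ]
        # Compress each distinct signature to a new integer label.
        table = {sig: k for k, sig in enumerate(sorted(set(signatures)))}
        labels = [table[sig] for sig in signatures]
    return labels

# Toy path graph C-C-O: the two carbons receive different WL labels because
# their neighborhoods differ.
adj = {0: [1], 1: [0, 2], 2: [1]}
print(wl_labels(["C", "C", "O"], adj, n_iters=2))
```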
arXiv Detail & Related papers (2020-06-12T02:11:51Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
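A guess at what such a plug-in could look like, assuming it appends leading eigenvectors of the adjacency matrix to the node features so that downstream GNN layers see explicit structural signal; this reading is based only on the name and summary above, and the paper's construction may differ.

```python
# Hypothetical structure-preserving plug-in: concatenate the k dominant
# adjacency eigenvectors to the node features.
import numpy as np

def eigen_plugin(node_feats, adjacency, k=2):
    """Append the k dominant adjacency eigenvectors to the node features."""
    eigvals, eigvecs = np.linalg.eigh(adjacency)        # symmetric adjacency
    top = eigvecs[:, np.argsort(-np.abs(eigvals))[:k]]  # k dominant eigenvectors
    return np.concatenate([node_feats, top], axis=1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                                           # trivial node features
print(eigen_plugin(X, A, k=2).shape)                    # (4, 6)
```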
arXiv Detail & Related papers (2020-06-08T02:47:38Z) - Multi-View Graph Neural Networks for Molecular Property Prediction [67.54644592806876]
We present Multi-View Graph Neural Network (MV-GNN), a multi-view message passing architecture.
In MV-GNN, we introduce a shared self-attentive readout component and disagreement loss to stabilize the training process.
We further boost the expressive power of MV-GNN by proposing a cross-dependent message passing scheme.
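A hedged sketch of the disagreement-loss idea mentioned above (the paper's exact loss and readout are not reproduced here): two view-specific predictions are each trained on the target and additionally penalized for disagreeing with one another.

```python
# Illustrative multi-view training loss with a disagreement term.
import numpy as np

def mv_loss(pred_view_a, pred_view_b, target, lam=0.1):
    task = np.mean((pred_view_a - target) ** 2) + np.mean((pred_view_b - target) ** 2)
    disagreement = np.mean((pred_view_a - pred_view_b) ** 2)   # keep views consistent
    return float(task + lam * disagreement)

y = np.array([0.2, 1.3, -0.5])
print(mv_loss(np.array([0.1, 1.1, -0.4]),
              np.array([0.3, 1.4, -0.6]), y))
```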
arXiv Detail & Related papers (2020-05-17T04:46:07Z)