Learning Attributed Graph Representations with Communicative Message
Passing Transformer
- URL: http://arxiv.org/abs/2107.08773v1
- Date: Mon, 19 Jul 2021 11:58:32 GMT
- Title: Learning Attributed Graph Representations with Communicative Message
Passing Transformer
- Authors: Jianwen Chen, Shuangjia Zheng, Ying Song, Jiahua Rao, Yuedong Yang
- Abstract summary: We propose a Communicative Message Passing Transformer (CoMPT) neural network to improve the molecular graph representation.
Unlike the previous transformer-style GNNs that treat molecules as fully connected graphs, we introduce a message diffusion mechanism to leverage the graph connectivity inductive bias.
- Score: 3.812358821429274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Constructing appropriate representations of molecules lies at the core of
numerous tasks such as materials science, chemistry, and drug design. Recent
studies abstract molecules as attributed graphs and employ graph neural
networks (GNNs) for molecular representation learning, which have made
remarkable achievements in molecular graph modeling. Albeit powerful, current
models are either based on local aggregation operations, and thus miss
higher-order graph properties, or focus only on node information without fully
using the edge information. To address this, we propose a Communicative Message
Passing Transformer (CoMPT) neural network to improve the molecular graph
representation by reinforcing message interactions between nodes and edges
based on the Transformer architecture. Unlike the previous transformer-style
GNNs that treat molecules as fully connected graphs, we introduce a message
diffusion mechanism to leverage the graph connectivity inductive bias and
reduce the message enrichment explosion. Extensive experiments demonstrate
that the proposed model achieves superior performance (around 4% improvement on
average) over state-of-the-art baselines on seven chemical property datasets
(graph-level tasks) and two chemical shift datasets (node-level tasks). Further
visualization studies also indicate a better representation capacity achieved
by our model.
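The abstract's core idea, nodes and edges exchanging messages under Transformer-style attention whose reach decays with graph distance rather than spanning a fully connected graph, can be sketched roughly as follows. This is a minimal NumPy illustration of the general mechanism, not the authors' implementation: the function name `communicative_layer`, the decay form `alpha ** dist`, and all tensor shapes are assumptions made for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def communicative_layer(h, e, adj, dist, alpha=0.5):
    """One illustrative node-edge communicative message passing step.

    h:     (n, d) node features
    e:     (n, n, d) edge features (zero where no bond)
    adj:   (n, n) adjacency matrix (1 = bonded)
    dist:  (n, n) shortest-path graph distances (np.inf if disconnected)
    alpha: diffusion decay factor in (0, 1) -- a hypothetical choice here
    """
    n, d = h.shape
    # Attention scores from node pairs, biased by edge features.
    scores = (h @ h.T) / np.sqrt(d) + e.sum(-1)
    # Message diffusion: attention decays with graph distance, instead of
    # treating the molecule as a fully connected graph.
    decay = alpha ** dist
    scores = np.where(dist < np.inf, scores + np.log(decay + 1e-9), -1e9)
    attn = softmax(scores, axis=-1)
    # Node update: attention-weighted mix of the other nodes' features.
    h_new = attn @ h
    # Edge update: nodes "communicate" back to their incident edges.
    e_new = e + adj[..., None] * (h_new[:, None, :] + h_new[None, :, :]) / 2.0
    return h_new, e_new

# Toy usage: a 3-atom path graph 0-1-2 with 4-dimensional features.
n, d = 3, 4
rng = np.random.default_rng(0)
h = rng.normal(size=(n, d))
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
e = adj[..., None] * rng.normal(size=(n, n, d))
dist = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], dtype=float)
h2, e2 = communicative_layer(h, e, adj, dist)
```

Note the design point the abstract stresses: the distance-based decay keeps distant atoms from exchanging messages as freely as bonded ones, which is the connectivity inductive bias that a vanilla fully connected Transformer lacks.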
Related papers
- Pre-trained Molecular Language Models with Random Functional Group Masking [54.900360309677794]
We propose a SMILES-based Molecular Language Model that randomly masks SMILES subsequences corresponding to specific molecular atoms.
This technique aims to compel the model to better infer molecular structures and properties, thus enhancing its predictive capabilities.
arXiv Detail & Related papers (2024-11-03T01:56:15Z) - Molecular Graph Representation Learning via Structural Similarity Information [11.38130169319915]
We introduce the Structural Similarity Motif GNN (MSSM-GNN), a novel molecular graph representation learning method.
In particular, we propose a specially designed graph that leverages graph kernel algorithms to represent the similarity between molecules quantitatively.
We employ GNNs to learn feature representations from molecular graphs, aiming to enhance the accuracy of property prediction by incorporating additional molecular representation information.
arXiv Detail & Related papers (2024-09-13T06:59:10Z) - MolGrapher: Graph-based Visual Recognition of Chemical Structures [50.13749978547401]
We introduce MolGrapher to recognize chemical structures visually.
We treat all candidate atoms and bonds as nodes and put them in a graph.
We classify atom and bond nodes in the graph with a Graph Neural Network.
arXiv Detail & Related papers (2023-08-23T16:16:11Z) - Bi-level Contrastive Learning for Knowledge-Enhanced Molecule
Representations [55.42602325017405]
We propose a novel method called GODE, which takes into account the two-level structure of individual molecules.
By pre-training two graph neural networks (GNNs) on different graph structures, combined with contrastive learning, GODE fuses molecular structures with their corresponding knowledge graph substructures.
When fine-tuned across 11 chemical property tasks, our model outperforms existing benchmarks, achieving an average ROC-AUC improvement of 13.8% on classification tasks and an average RMSE/MAE improvement of 35.1% on regression tasks.
arXiv Detail & Related papers (2023-06-02T15:49:45Z) - Molecular Joint Representation Learning via Multi-modal Information [11.493011069441188]
We propose a novel framework of molecular joint representation learning via Multi-Modal information of SMILES and molecular Graphs, called MMSG.
We improve the self-attention mechanism by introducing a bond-level graph representation as an attention bias in the Transformer.
We further propose a Bidirectional Message Communication Graph Neural Network (BMC GNN) to strengthen the information flow aggregated from graphs for further combination.
arXiv Detail & Related papers (2022-11-25T11:53:23Z) - Graph neural networks for the prediction of molecular structure-property
relationships [59.11160990637615]
Graph neural networks (GNNs) are a machine learning method that works directly on the molecular graph.
GNNs allow learning properties in an end-to-end fashion, thereby avoiding the need for informative descriptors.
We describe the fundamentals of GNNs and demonstrate the application of GNNs via two examples for molecular property prediction.
arXiv Detail & Related papers (2022-07-25T11:30:44Z) - Molecular Graph Generation via Geometric Scattering [7.796917261490019]
Graph neural networks (GNNs) have been used extensively for addressing problems in drug design and discovery.
We propose a representation-first approach to molecular graph generation.
We show that our architecture learns meaningful representations of drug datasets and provides a platform for goal-directed drug synthesis.
arXiv Detail & Related papers (2021-10-12T18:00:23Z) - Self-Supervised Graph Transformer on Large-Scale Molecular Data [73.3448373618865]
We propose a novel framework, GROVER, for molecular representation learning.
GROVER can learn rich structural and semantic information of molecules from enormous unlabelled molecular data.
We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning.
arXiv Detail & Related papers (2020-06-18T08:37:04Z) - Multi-View Graph Neural Networks for Molecular Property Prediction [67.54644592806876]
We present Multi-View Graph Neural Network (MV-GNN), a multi-view message passing architecture.
In MV-GNN, we introduce a shared self-attentive readout component and disagreement loss to stabilize the training process.
We further boost the expressive power of MV-GNN by proposing a cross-dependent message passing scheme.
arXiv Detail & Related papers (2020-05-17T04:46:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.