ProGReST: Prototypical Graph Regression Soft Trees for Molecular
Property Prediction
- URL: http://arxiv.org/abs/2210.03745v1
- Date: Fri, 7 Oct 2022 10:21:24 GMT
- Title: ProGReST: Prototypical Graph Regression Soft Trees for Molecular
Property Prediction
- Authors: Dawid Rymarczyk, Daniel Dobrowolski, Tomasz Danel
- Abstract summary: The Prototypical Graph Regression Self-explainable Trees (ProGReST) model combines prototype learning, soft decision trees, and Graph Neural Networks.
In ProGReST, the rationale is obtained along with prediction due to the model's built-in interpretability.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose the novel Prototypical Graph Regression
Self-explainable Trees (ProGReST) model, which combines prototype learning,
soft decision trees, and Graph Neural Networks. In contrast to other works, our
model can be used to address various challenging tasks, including compound
property prediction. In ProGReST, the rationale is obtained along with
prediction due to the model's built-in interpretability. Additionally, we
introduce a new graph prototype projection to accelerate model training.
Finally, we evaluate ProGReST on a wide range of chemical datasets for
molecular property prediction and perform in-depth analysis with chemical
experts to evaluate obtained interpretations. Our method achieves competitive
results against state-of-the-art methods.
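To make the combination of prototype learning, soft decision trees, and GNNs concrete, below is a minimal sketch of a soft regression tree that routes a GNN graph embedding according to its similarity to learned prototypes. The SoftPrototypeTree class, its sigmoid routing rule, and the tree depth are illustrative assumptions, not the authors' implementation; any GNN readout can supply the embeddings.

```python
# Illustrative sketch (not the authors' code): a soft regression tree whose
# internal nodes route a graph embedding by its similarity to learned prototypes.
import torch
import torch.nn as nn

class SoftPrototypeTree(nn.Module):
    def __init__(self, embed_dim: int, depth: int = 3):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1          # internal (routing) nodes
        n_leaves = 2 ** depth                # leaf nodes hold scalar predictions
        # One prototype vector per internal node (hypothetical parameterization).
        self.prototypes = nn.Parameter(torch.randn(n_internal, embed_dim))
        self.leaf_values = nn.Parameter(torch.zeros(n_leaves))

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        """h: (batch, embed_dim) graph embeddings from any GNN encoder."""
        batch = h.size(0)
        # Similarity to each prototype -> probability of taking the right child.
        dist = torch.cdist(h, self.prototypes)            # (batch, n_internal)
        p_right = torch.sigmoid(-dist)                    # close to prototype => go right
        # Accumulate path probabilities down the complete binary tree.
        path = torch.ones(batch, 1)
        offset = 0
        for level in range(self.depth):
            n_nodes = 2 ** level
            p = p_right[:, offset:offset + n_nodes]       # routing probs at this level
            path = torch.stack([path * (1 - p), path * p], dim=-1).flatten(1)
            offset += n_nodes
        # Prediction: leaf values weighted by the probability of reaching each leaf.
        return path @ self.leaf_values

# Usage with a stand-in "GNN" embedding:
tree = SoftPrototypeTree(embed_dim=64)
graph_embeddings = torch.randn(8, 64)      # would come from a GNN readout in practice
print(tree(graph_embeddings).shape)        # torch.Size([8])
```

Because each leaf holds a scalar value and the prediction is a probability-weighted combination of leaves, the routing decisions along the tree can be inspected alongside the prototypes, which is the kind of built-in interpretability the abstract describes.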
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
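For readers unfamiliar with the boosting component, the sketch below is a generic gradient-boosting loop for squared error using shallow scikit-learn trees as weak learners. It illustrates only the boosting idea the abstract refers to; it contains no score matching and is not the SSM model itself.

```python
# Generic gradient boosting for squared error with shallow regression trees.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_rounds=50, lr=0.1, max_depth=2):
    pred = np.full(len(y), y.mean())         # start from the mean prediction
    base = pred[0]
    learners = []
    for _ in range(n_rounds):
        residual = y - pred                  # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        pred += lr * tree.predict(X)         # take a small step along the fitted residual
        learners.append(tree)
    return base, learners

def predict(base, learners, X, lr=0.1):
    out = np.full(X.shape[0], base)
    for tree in learners:
        out += lr * tree.predict(X)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X[:, 0] ** 2 + 0.1 * rng.normal(size=200)
base, learners = fit_gradient_boosting(X, y)
print(np.mean((predict(base, learners, X) - y) ** 2))   # training MSE
```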
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- Global Concept Explanations for Graphs by Contrastive Learning [0.6906005491572401]
We propose a method to extract global concept explanations from the predictions of graph neural networks.
We conduct computational experiments on synthetic and real-world graph property prediction tasks.
arXiv Detail & Related papers (2024-04-25T11:43:46Z)
- Interpretable Prototype-based Graph Information Bottleneck [22.25047783463307]
We propose a novel framework of explainable Graph Neural Networks (GNNs) called interpretable Prototype-based Graph Information Bottleneck (PGIB)
PGIB incorporates prototype learning within the information bottleneck framework to provide prototypes with the key subgraph from the input graph that is important for the model prediction.
Extensive experiments, including qualitative analysis, demonstrate that PGIB outperforms state-of-the-art methods in terms of both prediction performance and explainability.
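As a rough schematic of combining prototype learning with an information-bottleneck-style objective, the snippet below scores a learned node mask, reads out a "key subgraph" embedding, classifies it via prototype similarities, and adds a crude compression penalty. The mask scorer, the mean-mask penalty, and all hyperparameters are assumptions for illustration and not PGIB's actual formulation.

```python
# Schematic sketch (an assumption, not PGIB's objective): prototype-based
# prediction plus an information-bottleneck-style penalty on a node mask.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, n_prototypes, n_classes = 32, 6, 2
prototypes = nn.Parameter(torch.randn(n_prototypes, embed_dim))
classifier = nn.Linear(n_prototypes, n_classes)
mask_scorer = nn.Linear(embed_dim, 1)        # scores each node for inclusion

def pgib_style_loss(node_embeddings, label, beta=0.01):
    """node_embeddings: (n_nodes, embed_dim) from any GNN; label: class index."""
    mask = torch.sigmoid(mask_scorer(node_embeddings)).squeeze(-1)   # (n_nodes,)
    subgraph = (mask.unsqueeze(-1) * node_embeddings).mean(dim=0)    # masked readout
    sims = -torch.cdist(subgraph.unsqueeze(0), prototypes)           # (1, n_prototypes)
    logits = classifier(sims)
    pred_loss = F.cross_entropy(logits, label.unsqueeze(0))
    compression = mask.mean()       # crude stand-in for a compression (MI) bound
    return pred_loss + beta * compression

loss = pgib_style_loss(torch.randn(10, embed_dim), torch.tensor(1))
loss.backward()
```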
arXiv Detail & Related papers (2023-10-30T18:16:19Z)
- Enhancing Model Learning and Interpretation Using Multiple Molecular Graph Representations for Compound Property and Activity Prediction [0.0]
This research introduces multiple molecular graph representations that incorporate higher-level information.
It investigates their effects on model learning and interpretation from diverse perspectives.
The results indicate that combining atom graph representation with reduced molecular graph representation can yield promising model performance.
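As a toy illustration of what a reduced molecular graph looks like next to the atom graph, the snippet below partitions atoms into fragments and turns each fragment into a single node. The hand-made fragment labels stand in for whatever reduction scheme (rings, functional groups, etc.) is actually used; they are not taken from the paper.

```python
# Toy "reduced" molecular graph: atoms are grouped into fragments and each
# fragment becomes one node; only edges crossing fragments are kept.
from collections import defaultdict

# Atom graph for a toluene-like toy molecule: atoms 0-5 form a ring, atom 6 is a methyl carbon.
atom_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 6)]
fragment_of = {0: "ring", 1: "ring", 2: "ring", 3: "ring", 4: "ring", 5: "ring", 6: "CH3"}

def reduce_graph(edges, fragment_of):
    reduced = defaultdict(set)
    for a, b in edges:
        fa, fb = fragment_of[a], fragment_of[b]
        if fa != fb:                      # drop edges internal to a fragment
            reduced[fa].add(fb)
            reduced[fb].add(fa)
    return dict(reduced)

print(reduce_graph(atom_edges, fragment_of))   # {'ring': {'CH3'}, 'CH3': {'ring'}}
```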
arXiv Detail & Related papers (2023-04-13T04:20:30Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Graph neural networks for the prediction of molecular structure-property relationships [59.11160990637615]
Graph neural networks (GNNs) are machine learning methods that operate directly on the molecular graph.
GNNs learn properties in an end-to-end fashion, thereby avoiding the need for informative descriptors.
We describe the fundamentals of GNNs and demonstrate the application of GNNs via two examples for molecular property prediction.
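Since this entry is about learning molecular properties end-to-end from the graph, here is a minimal message-passing network for a single molecular graph written in plain PyTorch. The two-layer architecture, mean aggregation, and ethanol-like toy input are illustrative assumptions; real pipelines typically use dedicated graph-learning libraries and batched molecular featurization.

```python
# Minimal message-passing network for a single molecular graph (illustration only).
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, 1)   # scalar molecular property

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        """x: (n_atoms, in_dim) atom features; adj: (n_atoms, n_atoms) adjacency."""
        a = adj + torch.eye(adj.size(0))                   # add self-loops
        a = a / a.sum(dim=1, keepdim=True)                 # mean aggregation
        h = torch.relu(self.lin1(a @ x))                   # message passing, layer 1
        h = torch.relu(self.lin2(a @ h))                   # message passing, layer 2
        return self.readout(h.mean(dim=0))                 # graph-level readout

# Toy input: 3 heavy atoms in a chain, one-hot element features (C, C, O).
x = torch.tensor([[1., 0.], [1., 0.], [0., 1.]])
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
model = TinyGNN(in_dim=2)
print(model(x, adj))    # predicted property (untrained, so arbitrary)
```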
arXiv Detail & Related papers (2022-07-25T11:30:44Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
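To show what Shapley-value attribution over a graph means in practice, the sketch below runs a plain Monte Carlo Shapley estimate that treats nodes as players and masks a node by zeroing its features. GraphSVX's own mask generator and surrogate decomposition differ from this; the toy_gnn function is a hypothetical stand-in for a trained model.

```python
# Plain Monte Carlo Shapley estimation over graph nodes (generic illustration,
# not GraphSVX's estimator).
import numpy as np

def shapley_values(predict, x, adj, n_samples=200, rng=None):
    """predict(x_masked, adj) -> scalar; players are nodes, masked by zeroing features."""
    rng = np.random.default_rng(0) if rng is None else rng
    n = x.shape[0]
    phi = np.zeros(n)
    for _ in range(n_samples):
        order = rng.permutation(n)
        mask = np.zeros(n, dtype=bool)
        prev = predict(x * mask[:, None], adj)
        for node in order:
            mask[node] = True
            cur = predict(x * mask[:, None], adj)
            phi[node] += cur - prev        # marginal contribution of this node
            prev = cur
    return phi / n_samples

# Toy "model": sum of neighbor-averaged first features (stands in for a trained GNN).
def toy_gnn(x, adj):
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    return float(((adj @ x) / deg)[:, 0].sum())

x = np.array([[1.0, 0.0], [2.0, 0.0], [0.5, 1.0]])
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(shapley_values(toy_gnn, x, adj))
```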
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
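A simplified picture of post-hoc edge masking in the spirit of this entry: learn a sigmoid gate per edge so that a frozen model's output stays close to the original while as few edges as possible survive. The paper trains amortized stochastic gates; the direct per-edge optimization, the frozen toy model, and the 0.1 sparsity weight below are illustrative assumptions.

```python
# Simplified post-hoc edge masking on a frozen toy model (schematic only).
import torch

def frozen_model(x, adj):
    """Stand-in for a trained GNN layer whose edges we want to explain."""
    deg = adj.sum(dim=1, keepdim=True) + 1e-9
    return ((adj @ x) / deg).sum()

x = torch.randn(5, 4)
adj = (torch.rand(5, 5) < 0.4).float()
edge_logits = torch.zeros_like(adj, requires_grad=True)
opt = torch.optim.Adam([edge_logits], lr=0.05)
target = frozen_model(x, adj).detach()

for step in range(300):
    gate = torch.sigmoid(edge_logits) * adj          # gates only on existing edges
    out = frozen_model(x, gate)
    fidelity = (out - target) ** 2                   # keep the prediction unchanged...
    sparsity = gate.sum() / adj.sum().clamp(min=1)   # ...while dropping edges
    loss = fidelity + 0.1 * sparsity
    opt.zero_grad()
    loss.backward()
    opt.step()

kept = ((torch.sigmoid(edge_logits) * adj) > 0.5).sum().item()
print(f"edges kept: {kept} of {int(adj.sum().item())}")
```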
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Physics-Constrained Predictive Molecular Latent Space Discovery with Graph Scattering Variational Autoencoder [0.0]
We develop a molecular generative model based on variational inference and graph theory in the small data regime.
The model's performance is evaluated by generating molecules with desired target properties.
arXiv Detail & Related papers (2020-09-29T09:05:27Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations, but model performance often deteriorates when many such layers are stacked.
Several recent studies attribute this performance deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
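To illustrate how information from large receptive fields can be incorporated adaptively, the sketch below decouples a per-node MLP transformation from repeated propagation and learns a per-node, per-depth gate over the propagated representations. The gating layer, the ten-hop depth, and row-normalized propagation are illustrative assumptions rather than the exact DAGNN architecture.

```python
# Sketch: decouple feature transformation from propagation and adaptively
# weight each propagation depth per node.
import torch
import torch.nn as nn

class AdaptiveDepthGNN(nn.Module):
    def __init__(self, in_dim, hidden, n_classes, k_hops=10):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))
        self.score = nn.Linear(n_classes, 1)   # scores each propagation depth
        self.k_hops = k_hops

    def forward(self, x, adj_norm):
        h = self.mlp(x)                                   # transform once
        hops = [h]
        for _ in range(self.k_hops):                      # propagate repeatedly
            hops.append(adj_norm @ hops[-1])
        stacked = torch.stack(hops, dim=1)                # (n_nodes, k+1, n_classes)
        gates = torch.sigmoid(self.score(stacked))        # per-node, per-depth weights
        return (gates * stacked).sum(dim=1)               # adaptive combination

n, d = 6, 8
adj = (torch.rand(n, n) < 0.3).float()
adj = ((adj + adj.t() + torch.eye(n)) > 0).float()        # symmetric, with self-loops
adj_norm = adj / adj.sum(dim=1, keepdim=True)             # row-normalized propagation
model = AdaptiveDepthGNN(in_dim=d, hidden=16, n_classes=3)
print(model(torch.randn(n, d), adj_norm).shape)           # torch.Size([6, 3])
```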
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Graph Neural Networks for the Prediction of Substrate-Specific Organic Reaction Conditions [79.45090959869124]
We present a systematic investigation using graph neural networks (GNNs) to model organic chemical reactions.
We evaluate seven different GNN architectures for classification tasks pertaining to the identification of experimental reagents and conditions.
arXiv Detail & Related papers (2020-07-08T17:21:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.