Controllable Edge-Type-Specific Interpretation in Multi-Relational Graph Neural Networks for Drug Response Prediction
- URL: http://arxiv.org/abs/2408.17129v2
- Date: Tue, 3 Sep 2024 08:45:37 GMT
- Title: Controllable Edge-Type-Specific Interpretation in Multi-Relational Graph Neural Networks for Drug Response Prediction
- Authors: Xiaodi Li, Jianfeng Gui, Qian Gao, Haoyuan Shi, Zhenyu Yue
- Abstract summary: We propose a novel post-hoc interpretability algorithm for cancer drug response prediction, CETExplainer.
It incorporates a controllable edge-type-specific weighting mechanism to provide fine-grained, biologically meaningful explanations for predictive models.
Empirical analysis on a real-world dataset demonstrates that CETExplainer achieves superior stability and explanation quality compared to leading algorithms.
- Score: 6.798254568821052
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks have been widely applied in critical decision-making areas that demand interpretable predictions, leading to the flourishing development of interpretability algorithms. However, current graph interpretability algorithms tend to emphasize generality and often overlook biological significance, thereby limiting their applicability to predicting cancer drug responses. In this paper, we propose a novel post-hoc interpretability algorithm for cancer drug response prediction, CETExplainer, which incorporates a controllable edge-type-specific weighting mechanism. It considers the mutual information between subgraphs and predictions, proposing a structural scoring approach to provide fine-grained, biologically meaningful explanations for predictive models. We also introduce a method for constructing ground truth based on real-world datasets to quantitatively evaluate the proposed interpretability algorithm. Empirical analysis on a real-world dataset demonstrates that CETExplainer achieves superior stability and improved explanation quality compared to leading algorithms, thereby offering a robust and insightful tool for cancer drug prediction.
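The paper does not reproduce CETExplainer's implementation here, but the core idea of controllable edge-type-specific weighting can be illustrated with a minimal sketch: each edge in a candidate explanation subgraph contributes a user-controllable weight determined by its relation type, so biologically meaningful relations can be emphasized when scoring explanations. All function names, edge types, and weight values below are hypothetical, chosen for illustration only.

```python
# Minimal sketch (not the authors' code) of edge-type-specific
# weighting for scoring candidate explanation subgraphs in a
# multi-relational drug-response graph.

def score_subgraph(subgraph_edges, type_weights):
    """Score a candidate explanation subgraph.

    subgraph_edges: list of (src, dst, edge_type) tuples
    type_weights:   dict mapping edge_type -> controllable weight
    """
    # Each edge contributes the weight assigned to its relation type,
    # so the user can steer explanations toward specific edge types.
    return sum(type_weights.get(t, 0.0) for _, _, t in subgraph_edges)

# Hypothetical edge types and weights in a drug-response graph.
weights = {"drug-gene": 1.0, "gene-gene": 0.5, "drug-drug": 0.2}
candidate = [("drugA", "TP53", "drug-gene"),
             ("TP53", "EGFR", "gene-gene")]
print(score_subgraph(candidate, weights))  # 1.5
```

In the paper's actual objective, such structural scores are combined with the mutual information between the subgraph and the model's prediction; the sketch above shows only the controllable-weighting component.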
Related papers
- Reasoning-Enhanced Healthcare Predictions with Knowledge Graph Community Retrieval [61.70489848327436]
KARE is a novel framework that integrates knowledge graph (KG) community-level retrieval with large language models (LLMs) reasoning.
Extensive experiments demonstrate that KARE outperforms leading models by up to 10.8-15.0% on MIMIC-III and 12.6-12.7% on MIMIC-IV for mortality and readmission predictions.
arXiv Detail & Related papers (2024-10-06T18:46:28Z)
- DRExplainer: Quantifiable Interpretability in Drug Response Prediction with Directed Graph Convolutional Network [9.641021461914551]
We propose a novel interpretable predictive model, DRExplainer, for drug response prediction.
DRExplainer constructs a directed bipartite network integrating multi-omics profiles of cell lines, the chemical structures of drugs, and known drug responses.
In computational experiments, DRExplainer outperforms state-of-the-art predictive methods and another graph-based explanation method.
arXiv Detail & Related papers (2024-08-22T05:45:48Z)
- Variational and Explanatory Neural Networks for Encoding Cancer Profiles and Predicting Drug Responses [40.80133767939435]
Existing AI models face challenges due to noise in transcriptomics data and lack of biological interpretability.
We introduce VETE, a novel neural network framework that incorporates a variational component to mitigate noise effects.
VETE bridges the gap between AI-driven predictions and biologically meaningful insights in cancer research.
arXiv Detail & Related papers (2024-07-05T13:13:02Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
- RandomSCM: interpretable ensembles of sparse classifiers tailored for omics data [59.4141628321618]
We propose an ensemble learning algorithm based on conjunctions or disjunctions of decision rules.
The interpretability of the models makes them useful for biomarker and pattern discovery in high-dimensional data.
arXiv Detail & Related papers (2022-08-11T13:55:04Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- A Novel Interaction-based Methodology Towards Explainable AI with Better Understanding of Pneumonia Chest X-ray Images [0.0]
This paper proposes an interaction-based methodology -- Influence Score (I-score) -- to screen out the noisy and non-informative variables in the images.
We apply the proposed method to a real-world Pneumonia Chest X-ray Image dataset and produce state-of-the-art results.
arXiv Detail & Related papers (2021-04-19T23:02:43Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.