Graph Rationalization with Environment-based Augmentations
- URL: http://arxiv.org/abs/2206.02886v1
- Date: Mon, 6 Jun 2022 20:23:30 GMT
- Title: Graph Rationalization with Environment-based Augmentations
- Authors: Gang Liu, Tong Zhao, Jiaxin Xu, Tengfei Luo, Meng Jiang
- Abstract summary: Rationale identification has improved the generalizability and interpretability of neural networks on vision and language data.
Existing graph pooling and/or distribution intervention methods suffer from a lack of examples from which to learn to identify optimal graph rationales.
We introduce a new augmentation operation called environment replacement that automatically creates virtual data examples to improve rationale identification.
- Score: 17.733488328772943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rationale is defined as a subset of input features that best explains or
supports the prediction by machine learning models. Rationale identification
has improved the generalizability and interpretability of neural networks on
vision and language data. In graph applications such as molecule and polymer
property prediction, identifying representative subgraph structures, called
graph rationales, plays an essential role in the performance of graph neural
networks. Existing graph pooling and/or distribution intervention methods
suffer from a lack of examples from which to learn to identify optimal graph rationales. In
this work, we introduce a new augmentation operation called environment
replacement that automatically creates virtual data examples to improve
rationale identification. We propose an efficient framework that performs
rationale-environment separation and representation learning on the real and
augmented examples in latent spaces to avoid the high complexity of explicit
graph decoding and encoding. In comparisons with recent techniques, experiments
on seven molecular and four polymer real-world datasets demonstrate the effectiveness
and efficiency of the proposed augmentation-based graph rationalization
framework.
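The core augmentation can be sketched in a few lines. The separator network, the names `h_rat`/`h_env`, and the additive composition below are illustrative assumptions, not the paper's exact architecture; the point is that virtual examples are formed entirely in latent space, with no graph decoding or encoding.

```python
import numpy as np

def environment_replacement(h_rat, h_env):
    """Create virtual examples by pairing each graph's rationale
    representation with another graph's environment representation,
    entirely in latent space.

    h_rat, h_env: (batch, dim) arrays from a hypothetical
    rationale-environment separator; the additive composition is an
    illustrative choice, not the paper's exact operator.
    """
    batch = h_rat.shape[0]
    # A cyclic shift guarantees every rationale receives an environment
    # from a *different* graph in the batch.
    perm = np.roll(np.arange(batch), shift=1)
    return h_rat + h_env[perm]

rng = np.random.default_rng(0)
h_rat = rng.normal(size=(4, 8))
h_env = rng.normal(size=(4, 8))
virtual = environment_replacement(h_rat, h_env)
```

In training, the virtual examples would be labeled with the rationale graph's property label, supplying the extra supervision that real data alone lacks.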
Related papers
- Incremental Learning with Concept Drift Detection and Prototype-based Embeddings for Graph Stream Classification [11.811637154674939]
This work introduces a novel method for graph stream classification.
It operates under the general setting where a data generating process produces graphs with varying numbers of nodes and edges over time.
It incorporates a loss-based concept drift detection mechanism to recalculate graph prototypes when drift is detected.
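A loss-based drift detector of this kind can be sketched with running loss statistics; the threshold rule below (mean plus `k` standard deviations, Welford updates) is an illustrative stand-in for the paper's mechanism, not its exact criterion.

```python
class LossDriftDetector:
    """Flag concept drift when the current loss exceeds the running
    mean by `k` standard deviations. Illustrative sketch only."""

    def __init__(self, k=3.0, warmup=10):
        self.k, self.warmup = k, warmup
        self.n, self.mean, self.m2 = 0, 0.0, 0.0  # Welford accumulators

    def update(self, loss):
        self.n += 1
        delta = loss - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (loss - self.mean)
        if self.n < self.warmup:
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        if loss > self.mean + self.k * std:
            # Drift detected: the caller would recompute graph
            # prototypes here; reset the statistics for the new regime.
            self.n, self.mean, self.m2 = 0, 0.0, 0.0
            return True
        return False

det = LossDriftDetector(k=3.0, warmup=10)
flags = [det.update(loss) for loss in [0.5, 0.6] * 10]  # stable stream
drifted = det.update(10.0)                              # sudden loss spike
```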
arXiv Detail & Related papers (2024-04-03T08:47:32Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
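If each graph is embedded as a zero-mean Gaussian distribution of smooth graph signals (an assumption about the embedding; the paper's full pipeline is not reproduced here), the distance underlying the mean is the standard Bures-Wasserstein distance between covariance matrices:

```python
import numpy as np

def psd_sqrt(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def bures_wasserstein_sq(a, b):
    """Squared Bures-Wasserstein (2-Wasserstein) distance between two
    zero-mean Gaussians with covariances a, b:
        tr(a) + tr(b) - 2 tr((a^{1/2} b a^{1/2})^{1/2})."""
    ra = psd_sqrt(a)
    return np.trace(a) + np.trace(b) - 2.0 * np.trace(psd_sqrt(ra @ b @ ra))

a = np.diag([1.0, 4.0])
b = np.diag([4.0, 1.0])
```

The graph mean is then the barycenter minimizing the sum of these squared distances over the embedded graphs, which the paper computes with an iterative algorithm.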
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
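One way to realize rationale-aware views is to sample nodes with probability proportional to their rationale scores, so salient nodes tend to survive in every view. The score source and masking scheme below are illustrative assumptions, not RGCL's exact construction.

```python
import numpy as np

def rationale_aware_view(x, scores, keep_ratio, rng):
    """Build one contrastive view by sampling nodes with probability
    proportional to their rationale scores. `scores` would come from a
    rationale generator; here it is just a nonnegative vector.

    x: (n, d) node features. Returns the masked view and kept indices.
    """
    n = x.shape[0]
    k = max(1, int(round(keep_ratio * n)))
    keep = rng.choice(n, size=k, replace=False, p=scores / scores.sum())
    view = np.zeros_like(x)
    view[keep] = x[keep]  # features of non-sampled nodes are masked out
    return view, np.sort(keep)

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 3))
scores = np.array([5.0, 5.0, 5.0, 0.1, 0.1, 0.1])  # first 3 nodes salient
view1, kept1 = rationale_aware_view(x, scores, 0.5, rng)
view2, kept2 = rationale_aware_view(x, scores, 0.5, rng)
```

Two such views of the same graph would then serve as a positive pair in a standard contrastive (e.g. InfoNCE) objective.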
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
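The proximal gradient iterations being unrolled can be sketched on a toy mixture model. The polynomial mixture `h1*S + h2*S@S`, the fixed step size, and the soft-threshold prox are illustrative assumptions; GDN instead learns these quantities end to end as network layers.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the L1 norm (promotes sparse edges)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def graph_deconvolution(a_obs, h1, h2, steps=200, lr=0.1, tau=1e-3):
    """Recover a latent graph S from an observed convolutive mixture
    a_obs ~ h1*S + h2*S@S by proximal gradient descent. Toy sketch:
    GDN unrolls/truncates these iterations with learned parameters."""
    s = np.zeros_like(a_obs)
    for _ in range(steps):
        resid = h1 * s + h2 * (s @ s) - a_obs
        grad = h1 * resid + h2 * (resid @ s.T + s.T @ resid)
        s = soft_threshold(s - lr * grad, tau)
        s = np.clip((s + s.T) / 2.0, 0.0, None)  # symmetric, nonnegative
    return s

a_true = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
h1, h2 = 1.0, 0.1
a_obs = h1 * a_true + h2 * (a_true @ a_true)  # observed mixture
s_hat = graph_deconvolution(a_obs, h1, h2)
```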
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- SUGAR: Subgraph Neural Network with Reinforcement Pooling and Self-Supervised Mutual Information Mechanism [33.135006052347194]
This paper presents SUGAR, a novel hierarchical, subgraph-level selection and embedding-based graph neural network for graph classification.
SUGAR reconstructs a sketched graph by extracting striking subgraphs as the representative part of the original graph to reveal subgraph-level patterns.
To differentiate subgraph representations among graphs, we present a self-supervised mutual information mechanism to encourage subgraph embedding.
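Subgraph-level selection can be sketched minimally as: enumerate candidate subgraphs, score them, and keep the top-k as the sketched graph. The 1-hop neighborhoods and the fixed feature-norm score below are illustrative stand-ins for SUGAR's learned scores, whose number of kept subgraphs is tuned by reinforcement pooling.

```python
import numpy as np

def select_striking_subgraphs(adj, x, k_keep=2):
    """Treat each node's 1-hop neighborhood as a candidate subgraph,
    score it by the mean feature norm of its members (a hypothetical
    stand-in for a learned score), and keep the top-k."""
    n = adj.shape[0]
    members, scores = [], []
    for v in range(n):
        nbrs = np.flatnonzero(adj[v]).tolist() + [v]
        members.append(nbrs)
        scores.append(np.linalg.norm(x[nbrs], axis=1).mean())
    top = np.argsort(scores)[::-1][:k_keep]
    # Mean-pool each selected subgraph into one embedding row.
    pooled = np.stack([x[members[i]].mean(axis=0) for i in top])
    return pooled, top

adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
x = np.arange(16, dtype=float).reshape(4, 4)  # row norms increase with index
pooled, top = select_striking_subgraphs(adj, x, k_keep=2)
```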
arXiv Detail & Related papers (2021-01-20T15:06:16Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
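DAGNN's key idea, decoupling the feature transformation from propagation and adaptively weighting each hop's representation, can be sketched as below. The tanh projection and random score vector are illustrative assumptions; the paper learns these parameters and uses its own scoring layer.

```python
import numpy as np

def dagnn_forward(adj, x, w_proj, w_score, hops=4):
    """Minimal DAGNN-style forward pass: transform node features once,
    propagate over 0..hops receptive fields without extra parameters,
    then combine hops with per-node, per-hop retention scores."""
    a = adj + np.eye(adj.shape[0])            # add self-loops
    d = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d[:, None] * d[None, :]      # symmetric normalization
    z = np.tanh(x @ w_proj)                   # shared transformation
    hs = [z]
    for _ in range(hops):
        hs.append(a_norm @ hs[-1])            # parameter-free propagation
    h = np.stack(hs, axis=1)                  # (n, hops+1, d)
    s = 1.0 / (1.0 + np.exp(-(h @ w_score)))  # retention scores (n, hops+1)
    return (s[..., None] * h).sum(axis=1)     # adaptive combination

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = rng.normal(size=(3, 5))
out = dagnn_forward(adj, x, rng.normal(size=(5, 4)), rng.normal(size=4))
```

Because propagation carries no parameters, the receptive field can grow large without the training difficulties of stacking many full convolution layers.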
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GraphOpt: Learning Optimization Models of Graph Formation [72.75384705298303]
We propose an end-to-end framework that learns an implicit model of graph structure formation and discovers an underlying optimization mechanism.
The learned objective can serve as an explanation for the observed graph properties, thereby lending itself to transfer across different graphs within a domain.
GraphOpt poses link formation in graphs as a sequential decision-making process and solves it with a maximum entropy inverse reinforcement learning algorithm.
arXiv Detail & Related papers (2020-07-07T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.