Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes
- URL: http://arxiv.org/abs/2308.02353v1
- Date: Fri, 4 Aug 2023 14:41:03 GMT
- Authors: Bardh Prenkaj, Mario Villaizan-Vallelado, Tobias Leemann, Gjergji Kasneci
- Abstract summary: We introduce a novel semi-supervised Graph Counterfactual Explainer (GCE) methodology, Dynamic GRAph Counterfactual Explainer (DyGRACE).
It leverages initial knowledge about the data distribution to search for valid counterfactuals while avoiding using information from potentially outdated decision functions in subsequent time steps.
DyGRACE is effective at finding valid counterfactuals and can also act as a drift detector, identifying distributional drift from differences in reconstruction errors between iterations.
- Score: 9.943459106509687
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce a novel semi-supervised Graph Counterfactual Explainer (GCE)
methodology, Dynamic GRAph Counterfactual Explainer (DyGRACE). It leverages
initial knowledge about the data distribution to search for valid
counterfactuals while avoiding using information from potentially outdated
decision functions in subsequent time steps. Employing two graph autoencoders
(GAEs), DyGRACE learns the representation of each class in a binary
classification scenario. The GAEs minimise the reconstruction error between the
original graph and its learned representation during training. The method
involves (i) optimising a parametric density function (implemented as a
logistic regression function) to identify counterfactuals by maximising the
factual autoencoder's reconstruction error, (ii) minimising the counterfactual
autoencoder's error, and (iii) maximising the similarity between the factual
and counterfactual graphs. This semi-supervised approach is independent of an
underlying black-box oracle. A logistic regression model is trained on a set of
graph pairs to learn weights that aid in finding counterfactuals. At inference,
for each unseen graph, the logistic regressor identifies the best
counterfactual candidate using these learned weights, while the GAEs can be
iteratively updated to represent the continual adaptation of the learned graph
representation over iterations. DyGRACE is quite effective and can act as a
drift detector, identifying distributional drift based on differences in
reconstruction errors between iterations. It avoids reliance on the oracle's
predictions in successive iterations, thereby increasing the efficiency of
counterfactual discovery. DyGRACE, with its capacity for contrastive learning
and drift detection, will offer new avenues for semi-supervised learning and
explanation generation.
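The abstract's three optimisation criteria can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes graphs have already been embedded as fixed-length vectors, uses simple linear autoencoders as stand-ins for the two GAEs, cosine similarity as a stand-in for graph similarity, and a hand-supplied weight vector in place of the trained logistic regressor. All function names here are hypothetical.

```python
import numpy as np

def recon_error(W, x):
    # Linear autoencoder stand-in for a GAE: encode with W, decode with W.T,
    # and return the squared reconstruction error for vector x.
    z = W @ x
    x_hat = W.T @ z
    return float(np.sum((x - x_hat) ** 2))

def similarity(x, y):
    # Cosine similarity as a stand-in for a graph similarity measure.
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

def score_candidate(W_fact, W_cf, x_fact, x_cand, w):
    # Logistic score over the three DyGRACE-style criteria from the abstract:
    # (i) high reconstruction error under the factual-class autoencoder,
    # (ii) low reconstruction error under the counterfactual-class autoencoder,
    # (iii) high similarity between the factual graph and the candidate.
    feats = np.array([
        recon_error(W_fact, x_cand),    # (i)  maximised
        -recon_error(W_cf, x_cand),     # (ii) minimised (negated feature)
        similarity(x_fact, x_cand),     # (iii) maximised
    ])
    logit = float(w @ feats)
    return 1.0 / (1.0 + np.exp(-logit))

def best_counterfactual(W_fact, W_cf, x_fact, candidates, w):
    # At inference, pick the candidate with the highest logistic score.
    scores = [score_candidate(W_fact, W_cf, x_fact, c, w) for c in candidates]
    return int(np.argmax(scores))

def drift_detected(prev_errors, curr_errors, threshold):
    # Drift check in the spirit of the abstract: flag drift when the mean
    # reconstruction error shifts by more than `threshold` between iterations.
    return abs(np.mean(curr_errors) - np.mean(prev_errors)) > threshold
```

In the paper the feature weights are learned by training a logistic regression on graph pairs, and the autoencoders are iteratively updated; here both are supplied directly to keep the sketch self-contained.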
Related papers
- Do We Really Need Graph Convolution During Training? Light Post-Training Graph-ODE for Efficient Recommendation [34.93725892725111]
The use of graph convolution networks (GCNs) when training recommender systems (RecSys) has raised persistent concerns.
This paper presents a critical examination of the necessity of graph convolutions during the training phase.
We introduce an innovative alternative: the Light Post-Training Graph Ordinary-Differential-Equation (LightGODE).
arXiv Detail & Related papers (2024-07-26T17:59:32Z) - RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks [10.473178462412584]
We propose a novel explanation method to interpret graph regression models (XAIG-R).
Our method addresses the distribution shifting problem and continuously ordered decision boundary issues.
We present a self-supervised learning strategy to tackle the continuously ordered labels in regression tasks.
arXiv Detail & Related papers (2023-07-15T16:16:22Z) - GIF: A General Graph Unlearning Strategy via Influence Function [63.52038638220563]
Graph Influence Function (GIF) is a model-agnostic unlearning method that can efficiently and accurately estimate parameter changes in response to a $\epsilon$-mass perturbation in deleted data.
We conduct extensive experiments on four representative GNN models and three benchmark datasets to justify GIF's superiority in terms of unlearning efficacy, model utility, and unlearning efficiency.
arXiv Detail & Related papers (2023-04-06T03:02:54Z) - Counterfactual Intervention Feature Transfer for Visible-Infrared Person Re-identification [69.45543438974963]
We find graph-based methods in the visible-infrared person re-identification task (VI-ReID) suffer from bad generalization because of two issues.
Well-trained input features weaken the learning of graph topology, so the model does not generalize well at inference time.
We propose a Counterfactual Intervention Feature Transfer (CIFT) method to tackle these problems.
arXiv Detail & Related papers (2022-08-01T16:15:31Z) - Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-Supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We successfully improved the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
arXiv Detail & Related papers (2022-07-05T03:41:20Z) - Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z) - Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA).
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Learning Graphs from Smooth Signals under Moment Uncertainty [23.868075779606425]
We consider the problem of inferring the graph structure from a given set of graph signals.
Traditional graph learning models do not take this distributional uncertainty into account.
arXiv Detail & Related papers (2021-05-12T06:47:34Z) - Distributed Training of Graph Convolutional Networks using Subgraph Approximation [72.89940126490715]
We propose a training strategy that mitigates the lost information across multiple partitions of a graph through a subgraph approximation scheme.
The subgraph approximation approach helps the distributed training system converge at single-machine accuracy.
arXiv Detail & Related papers (2020-12-09T09:23:49Z) - Active Learning on Attributed Graphs via Graph Cognizant Logistic Regression and Preemptive Query Generation [37.742218733235084]
We propose a novel graph-based active learning algorithm for the task of node classification in attributed graphs.
Our algorithm uses graph cognizant logistic regression, equivalent to a linearized graph convolutional neural network (GCN) for the prediction phase and maximizes the expected error reduction in the query phase.
We conduct experiments on five public benchmark datasets, demonstrating a significant improvement over state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-09T18:00:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.