Cooperative Classification and Rationalization for Graph Generalization
- URL: http://arxiv.org/abs/2403.06239v1
- Date: Sun, 10 Mar 2024 15:38:20 GMT
- Title: Cooperative Classification and Rationalization for Graph Generalization
- Authors: Linan Yue, Qi Liu, Ye Liu, Weibo Gao, Fangzhou Yao, Wenfeng Li
- Abstract summary: We propose a Cooperative Classification and Rationalization (C2R) method, consisting of the classification and the rationalization module.
We introduce diverse training distributions using an environment-conditional generative network, enabling robust graph representations.
Finally, we infer multiple environments by gathering non-rationale representations and incorporate them into the classification module for cooperative learning.
- Score: 10.664756327958262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved impressive results in graph
classification tasks, but they struggle to generalize effectively when faced
with out-of-distribution (OOD) data. Several approaches have been proposed to
address this problem. Among them, one solution is to diversify training
distributions in vanilla classification by modifying the data environment, yet
accessing the environment information is complex. Besides, another promising
approach involves rationalization, extracting invariant rationales for
predictions. However, extracting rationales is difficult due to limited
learning signals, resulting in less accurate rationales and diminished
predictions. To address these challenges, in this paper, we propose a
Cooperative Classification and Rationalization (C2R) method, consisting of the
classification and the rationalization module. Specifically, we first assume
that multiple environments are available in the classification module. Then, we
introduce diverse training distributions using an environment-conditional
generative network, enabling robust graph representations. Meanwhile, the
rationalization module employs a separator to identify relevant rationale
subgraphs while the remaining non-rationale subgraphs are de-correlated with
labels. Next, we align graph representations from the classification module
with rationale subgraph representations using the knowledge distillation
methods, enhancing the learning signal for rationales. Finally, we infer
multiple environments by gathering non-rationale representations and
incorporate them into the classification module for cooperative learning.
Extensive experimental results on both benchmarks and synthetic datasets
demonstrate the effectiveness of C2R. Code is available at
https://github.com/yuelinan/Codes-of-C2R.
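The pipeline sketched in the abstract (a separator scoring nodes to pick a rationale subgraph, and a knowledge-distillation-style alignment between the classification module's graph representation and the rationale representation) can be illustrated with a minimal numpy toy. Everything here is a hedged simplification: `separator_scores`, `rationale_split`, and `kd_align_loss` are hypothetical names, and mean pooling stands in for the paper's actual encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def separator_scores(node_feats, w):
    """Score each node's relevance to the rationale (sigmoid of a linear map)."""
    logits = node_feats @ w
    return 1.0 / (1.0 + np.exp(-logits))

def rationale_split(node_feats, scores, ratio=0.5):
    """Keep the top-`ratio` scored nodes as the rationale subgraph;
    the remaining nodes form the non-rationale (environment) part."""
    k = max(1, int(len(scores) * ratio))
    order = np.argsort(-scores)
    return node_feats[order[:k]], node_feats[order[k:]]

def kd_align_loss(graph_rep, rationale_rep):
    """Distillation-style alignment: pull the classification module's
    graph representation toward the rationale subgraph representation."""
    return float(np.mean((graph_rep - rationale_rep) ** 2))

# Toy graph: 6 nodes with 4-dim features.
X = rng.normal(size=(6, 4))
w = rng.normal(size=4)
scores = separator_scores(X, w)
rat, non_rat = rationale_split(X, scores, ratio=0.5)

# Mean-pool both representations, then compute the alignment loss.
graph_rep = X.mean(axis=0)        # stand-in for the classification module's encoder
rationale_rep = rat.mean(axis=0)  # rationale subgraph representation
loss = kd_align_loss(graph_rep, rationale_rep)
```

In the actual method the non-rationale representations (`non_rat` here) would additionally be gathered to infer environments for the classification module; this sketch only shows the split-and-align step.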
Related papers
- Two Birds with One Stone: Enhancing Uncertainty Quantification and Interpretability with Graph Functional Neural Process [27.760002432327962]
Graph neural networks (GNNs) are powerful tools on graph data.
However, their predictions are mis-calibrated and lack interpretability.
We propose a new uncertainty-aware and interpretable graph classification model.
arXiv Detail & Related papers (2025-08-23T17:48:05Z)
- Improving out-of-distribution generalization in graphs via hierarchical semantic environments [5.481047026874547]
We propose a novel approach to generate hierarchical environments for each graph.
We introduce a new learning objective that guides our model to learn the diversity of environments within the same hierarchy.
Our framework achieves up to 1.29% and 2.83% improvement over the best baselines on IC50 and EC50 prediction tasks, respectively.
arXiv Detail & Related papers (2024-03-04T07:03:10Z)
- PAC Learnability under Explanation-Preserving Graph Perturbations [15.83659369727204]
Graph neural networks (GNNs) operate over graphs, enabling the model to leverage the complex relationships and dependencies in graph-structured data.
A graph explanation is a subgraph which is an 'almost sufficient' statistic of the input graph with respect to its classification label.
This work considers two methods for leveraging such perturbation invariances in the design and training of GNNs.
arXiv Detail & Related papers (2024-02-07T17:23:15Z)
- Fine-grained Graph Rationalization [51.293401030058085]
We propose fine-grained graph rationalization (FIG) for graph machine learning.
Our idea is driven by the self-attention mechanism, which provides rich interactions between input nodes.
Our experiments involve 7 real-world datasets, and the proposed FIG shows significant performance advantages compared to 13 baseline methods.
arXiv Detail & Related papers (2023-12-13T02:56:26Z)
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Network (GNN) has demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on all three datasets on image classification in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- Graph Rationalization with Environment-based Augmentations [17.733488328772943]
Rationale identification has improved the generalizability and interpretability of neural networks on vision and language data.
Existing graph pooling and/or distribution intervention methods suffer from lack of examples to learn to identify optimal graph rationales.
We introduce a new augmentation operation called environment replacement that automatically creates virtual data examples to improve rationale identification.
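The environment-replacement idea can be sketched as attaching one graph's rationale to another graph's non-rationale (environment) part to form a virtual training example. The snippet below is an illustrative simplification under assumed names; plain node-feature concatenation stands in for the paper's actual graph construction.

```python
import numpy as np

def environment_replacement(rationale_a, env_b):
    """Create a virtual example by combining graph A's rationale nodes
    with graph B's environment nodes; the label follows the rationale."""
    return np.concatenate([rationale_a, env_b], axis=0)

rng = np.random.default_rng(1)
rationale_a = rng.normal(size=(3, 4))  # rationale nodes of graph A
env_b = rng.normal(size=(5, 4))        # environment nodes of graph B
virtual = environment_replacement(rationale_a, env_b)
# `virtual` keeps A's label, since the rationale carries the label signal,
# while exposing the model to B's environment.
```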
arXiv Detail & Related papers (2022-06-06T20:23:30Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) is to find a small subset of the input graph's features which guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
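CCVR-style calibration can be approximated by fitting per-class Gaussians over feature vectors and sampling virtual representations from them for classifier adjustment. The sketch below uses diagonal covariances for simplicity; `fit_class_gaussians` and `sample_virtual` are hypothetical helper names, not the paper's API.

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Estimate a per-class Gaussian (mean, diagonal variance) over features."""
    stats = {}
    for c in np.unique(labels):
        fc = feats[labels == c]
        stats[c] = (fc.mean(axis=0), fc.var(axis=0) + 1e-6)
    return stats

def sample_virtual(stats, n_per_class, rng):
    """Draw virtual representations from each class Gaussian; these can
    recalibrate the classifier without sharing raw client data."""
    xs, ys = [], []
    for c, (mu, var) in stats.items():
        xs.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.shape[0])))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

rng = np.random.default_rng(2)
feats = rng.normal(size=(20, 8))        # pooled per-client features (toy)
labels = rng.integers(0, 2, size=20)
stats = fit_class_gaussians(feats, labels)
vx, vy = sample_virtual(stats, 10, rng)  # 10 virtual samples per class
```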
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- Structured Graph Learning for Clustering and Semi-supervised Classification [74.35376212789132]
We propose a graph learning framework to preserve both the local and global structure of data.
Our method uses the self-expressiveness of samples to capture the global structure and adaptive neighbor approach to respect the local structure.
Our model is equivalent to a combination of kernel k-means and k-means methods under certain conditions.
arXiv Detail & Related papers (2020-08-31T08:41:20Z)
- Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles the naturally non-uniformly distributed data in real-world scenarios.
While modern classifiers perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.