Graph Harmony: Denoising and Nuclear-Norm Wasserstein Adaptation for
Enhanced Domain Transfer in Graph-Structured Data
- URL: http://arxiv.org/abs/2301.12361v2
- Date: Wed, 13 Dec 2023 01:18:41 GMT
- Title: Graph Harmony: Denoising and Nuclear-Norm Wasserstein Adaptation for
Enhanced Domain Transfer in Graph-Structured Data
- Authors: Mengxi Wu and Mohammad Rostami
- Abstract summary: We develop the Denoising and Nuclear-Norm Wasserstein Adaptation Network (DNAN)
DNAN employs the Nuclear-norm Wasserstein discrepancy (NWD), which can simultaneously achieve domain alignment and class distinguishment.
Our comprehensive experiments demonstrate that DNAN outperforms state-of-the-art methods on standard UDA benchmarks for graph classification.
- Score: 23.871860648919593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-structured data can be found in numerous domains, yet the scarcity of
labeled instances hinders the effective utilization of deep learning in many
scenarios. Traditional unsupervised domain adaptation (UDA) strategies for
graphs primarily hinge on adversarial learning and pseudo-labeling. These
approaches fail to effectively leverage graph discriminative features, leading
to class mismatching and unreliable label quality. To navigate these obstacles,
we develop the Denoising and Nuclear-Norm Wasserstein Adaptation Network
(DNAN). DNAN employs the Nuclear-norm Wasserstein discrepancy (NWD), which can
simultaneously achieve domain alignment and class distinguishment. DNAN also
integrates a denoising mechanism via a variational graph autoencoder that
mitigates data noise. This denoising mechanism helps capture essential features
of both source and target domains, improving the robustness of the domain
adaptation process. Our comprehensive experiments demonstrate that DNAN
outperforms state-of-the-art methods on standard UDA benchmarks for graph
classification.
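The abstract does not spell out the NWD objective. As a rough illustration only, nuclear-norm-based discrepancies in related work (e.g. DALN) score a batch of softmax predictions by the nuclear norm of the prediction matrix, which grows with both prediction confidence and class diversity. Below is a minimal numpy sketch under that assumption; the function names are hypothetical and this is not DNAN's actual implementation:

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits (numerically stabilized)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nuclear_wasserstein_discrepancy(source_logits, target_logits):
    """Hypothetical NWD-style score: the difference between the nuclear
    norms (sum of singular values) of the batched softmax prediction
    matrices of the two domains.  A larger nuclear norm loosely indicates
    more confident and more class-diverse predictions."""
    p_s = softmax(source_logits)
    p_t = softmax(target_logits)
    return np.linalg.norm(p_s, "nuc") - np.linalg.norm(p_t, "nuc")
```

For a batch of confident one-hot-like predictions the prediction matrix approaches full rank, so its nuclear norm is high; a batch of uniform predictions is rank one with a small nuclear norm, so the discrepancy separates the two regimes.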
Related papers
- Nested Graph Pseudo-Label Refinement for Noisy Label Domain Adaptation Learning [9.190820361516415]
Nested Graph Pseudo-Label Refinement (NeGPR) is a novel framework tailored for graph-level domain adaptation with noisy labels. NeGPR consistently outperforms state-of-the-art methods under severe label noise, achieving gains of up to 12.7% in accuracy.
arXiv Detail & Related papers (2025-08-01T15:32:40Z)
- Bridging Domain Adaptation and Graph Neural Networks: A Tensor-Based Framework for Effective Label Propagation [23.79865440689265]
Graph Neural Networks (GNNs) have recently become the predominant tools for studying graph data.
Despite state-of-the-art performance on graph classification tasks, GNNs are overwhelmingly trained in a single domain under supervision.
We propose the Label-Propagation Graph Neural Network (LP-TGNN) framework to bridge the gap between graph data and traditional domain adaptation methods.
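LP-TGNN's tensor-based formulation is not given here. For orientation, the label-propagation primitive such frameworks build on can be sketched as follows (a generic textbook version, not LP-TGNN itself; all names are illustrative):

```python
import numpy as np

def label_propagation(adj, y0, alpha=0.9, iters=50):
    """Generic label propagation: repeatedly mix each node's label
    distribution with its neighbors' via the row-normalized adjacency,
    anchoring the initially labeled nodes with weight (1 - alpha)."""
    deg = adj.sum(axis=1, keepdims=True)
    s = adj / np.maximum(deg, 1e-12)      # row-stochastic propagation matrix
    y = y0.copy()
    for _ in range(iters):
        y = alpha * (s @ y) + (1 - alpha) * y0
    return y
```

On a three-node path with the endpoints labeled with opposite classes, the middle node ends up with an even mixture, which is the expected behavior of the smoothing operator.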
arXiv Detail & Related papers (2025-02-12T15:36:38Z)
- SIDDA: SInkhorn Dynamic Domain Adaptation for Image Classification with Equivariant Neural Networks [37.69303106863453]
SIDDA is an out-of-the-box DA training algorithm built upon the Sinkhorn divergence.
We find that SIDDA enhances the generalization capabilities of NNs.
We also study the efficacy of SIDDA on ENNs with respect to the varying group orders of the dihedral group $D_N$.
arXiv Detail & Related papers (2025-01-23T19:29:34Z)
- Rank and Align: Towards Effective Source-free Graph Domain Adaptation [16.941755478093153]
Graph neural networks (GNNs) have achieved impressive performance in graph domain adaptation.
However, extensive source graphs could be unavailable in real-world scenarios due to privacy and storage concerns.
We introduce a novel GNN-based approach called Rank and Align (RNA), which ranks graph similarities with spectral seriation for robust semantics learning.
arXiv Detail & Related papers (2024-08-22T08:00:50Z)
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP), however, has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance [53.73316938815873]
We propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE) to learn representations with error tolerance.
ERASE combines prototype pseudo-labels with propagated denoised labels and updates representations with error resilience.
Our method can outperform multiple baselines with clear margins in broad noise levels and enjoy great scalability.
arXiv Detail & Related papers (2023-12-13T17:59:07Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
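GNNSafe's detection score is based on the standard energy score from energy-based OOD detection, E(x) = -T * logsumexp(f(x)/T). A minimal sketch of that score alone (GNNSafe additionally propagates energies over the graph, which is omitted here):

```python
import numpy as np

def energy_score(logits, temperature=1.0):
    """Energy score E(x) = -T * logsumexp(logits / T), computed row-wise
    with the max-shift trick for numerical stability.  Higher energy
    suggests an out-of-distribution input."""
    z = logits / temperature
    m = z.max(axis=1)
    return -temperature * (m + np.log(np.exp(z - m[:, None]).sum(axis=1)))
```

A confidently classified input (one dominant logit) has low energy, while an input with flat logits has higher energy, which is what makes thresholding the score usable as an OOD detector.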
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows practical value in reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA)
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most of existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
- Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances a node's features using its local subgraph structure.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z)
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume that source and target domain data are simultaneously available during training.
A pre-trained source model is always assumed to be available, even though it may perform poorly on the target domain due to the well-known domain shift problem.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- MetaCorrection: Domain-aware Meta Loss Correction for Unsupervised Domain Adaptation in Semantic Segmentation [14.8840510432657]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain.
Existing self-training based UDA approaches assign pseudo labels for target data and treat them as ground truth labels.
Pseudo labels generated by the model optimized on the source domain inevitably contain noise due to the domain gap.
arXiv Detail & Related papers (2021-03-09T06:57:03Z)
- Unified Robust Training for Graph Neural Networks against Label Noise [12.014301020294154]
We propose a new framework, UnionNET, for learning with noisy labels on graphs under a semi-supervised setting.
Our approach provides a unified solution for robustly training GNNs and performing label correction simultaneously.
arXiv Detail & Related papers (2021-03-05T01:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.