Propagation with Adaptive Mask then Training for Node Classification on
Attributed Networks
- URL: http://arxiv.org/abs/2206.10142v2
- Date: Thu, 23 Jun 2022 05:18:57 GMT
- Title: Propagation with Adaptive Mask then Training for Node Classification on
Attributed Networks
- Authors: Jinsong Chen, Boyu Li, Qiuting He, Kun He
- Abstract summary: Node classification on attributed networks is a semi-supervised task that is crucial for network analysis.
We propose a new method called the Propagation with Adaptive Mask then Training (PAMT).
The key idea is to integrate the attribute similarity mask into the structure-aware propagation process.
In this way, PAMT could preserve the correlation of the attribute of adjacent nodes during the propagation and effectively reduce the influence of structure noise.
- Score: 10.732648536892377
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Node classification on attributed networks is a semi-supervised task that is
crucial for network analysis. By decoupling two critical operations in Graph
Convolutional Networks (GCNs), namely feature transformation and neighborhood
aggregation, some recent works of decoupled GCNs could support the information
to propagate deeper and achieve advanced performance. However, they follow the
traditional structure-aware propagation strategy of GCNs, making it hard to
capture the attribute correlation of nodes and sensitive to the structure noise
described by edges whose two endpoints belong to different categories. To
address these issues, we propose a new method called the Propagation
with Adaptive Mask then Training (PAMT). The key idea is to integrate the
attribute similarity mask into the structure-aware propagation process. In this
way, PAMT could preserve the attribute correlation of adjacent nodes during the
propagation and effectively reduce the influence of structure noise. Moreover,
we develop an iterative refinement mechanism to update the similarity mask
during the training process for improving the training performance. Extensive
experiments on four real-world datasets demonstrate the superior performance
and robustness of PAMT.
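The key idea described above, masking the structure-aware propagation with an attribute similarity mask, can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the function name, the cosine-similarity choice for the mask, and the dense-matrix, row-normalized propagation are all assumptions made for clarity.

```python
import numpy as np

def masked_propagation(adj, features, num_steps=2):
    """Sketch of attribute-masked propagation over a graph.

    adj:      dense adjacency matrix (n x n), 1.0 where an edge exists.
    features: node attribute matrix (n x d).
    """
    # Cosine similarity between node attribute vectors.
    norms = np.linalg.norm(features, axis=1, keepdims=True) + 1e-12
    unit = features / norms
    sim = unit @ unit.T                      # (n, n) attribute similarity

    # Keep similarity only on existing edges, so edges between
    # dissimilar nodes (likely structure noise) are down-weighted.
    mask = np.clip(sim, 0.0, 1.0) * adj
    mask += np.eye(adj.shape[0])             # retain self-loops

    # Row-normalize the masked adjacency so each row sums to 1.
    prop = mask / mask.sum(axis=1, keepdims=True)

    # Propagate features over the masked structure.
    h = features
    for _ in range(num_steps):
        h = prop @ h
    return h
```

In this sketch an edge whose endpoints have orthogonal attributes contributes nothing to the aggregation, which mirrors how the similarity mask suppresses structure noise; the paper's iterative refinement of the mask during training is omitted here.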
Related papers
- AdaRC: Mitigating Graph Structure Shifts during Test-Time [66.40525136929398]
Test-time adaptation (TTA) has attracted attention due to its ability to adapt a pre-trained model to a target domain without re-accessing the source domain.
We propose AdaRC, an innovative framework designed for effective and efficient adaptation to structure shifts in graphs.
arXiv Detail & Related papers (2024-10-09T15:15:40Z)
- CMFDFormer: Transformer-based Copy-Move Forgery Detection with Continual Learning [52.72888626663642]
Copy-move forgery detection aims at detecting duplicated regions in a suspected forged image.
Deep learning based copy-move forgery detection methods are in the ascendant.
We propose a Transformer-style copy-move forgery network named as CMFDFormer.
We also provide a novel PCSD continual learning framework to help CMFDFormer handle new tasks.
arXiv Detail & Related papers (2023-11-22T09:27:46Z)
- Domain-adaptive Message Passing Graph Neural Network [67.35534058138387]
Cross-network node classification (CNNC) aims to classify nodes in a label-deficient target network by transferring the knowledge from a source network with abundant labels.
We propose a domain-adaptive message passing graph neural network (DM-GNN), which integrates graph neural network (GNN) with conditional adversarial domain adaptation.
arXiv Detail & Related papers (2023-08-31T05:26:08Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Supervised Adaptive Threshold Network for Instance Segmentation [4.347876036795798]
The method builds on Mask R-CNN with an adaptive threshold, a layered adaptive network structure, and an adaptive feature pool.
Experiments on benchmark datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-06-07T09:25:44Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Variational Co-embedding Learning for Attributed Network Clustering [30.7006907516984]
Recent works for attributed network clustering utilize graph convolution to obtain node embeddings and simultaneously perform clustering assignments on the embedding space.
We propose a variational co-embedding learning model for attributed network clustering (ANC).
ANC is composed of dual variational auto-encoders to simultaneously embed nodes and attributes.
arXiv Detail & Related papers (2021-04-15T08:11:47Z)
- Faster Convergence in Deep-Predictive-Coding Networks to Learn Deeper Representations [12.716429755564821]
Deep-predictive-coding networks (DPCNs) are hierarchical, generative models that rely on feed-forward and feed-back connections.
A crucial element of DPCNs is a forward-backward inference procedure to uncover sparse states of a dynamic model.
We propose an optimization strategy, with better empirical and theoretical convergence, based on accelerated proximal gradients.
arXiv Detail & Related papers (2021-01-18T02:30:13Z)
- Unifying Homophily and Heterophily Network Transformation via Motifs [20.45207959265955]
Higher-order proximity (HOP) is fundamental for most network embedding methods.
We propose the homophily and heterophily preserving network transformation (H2NT) to capture HOP in a way that flexibly unifies homophily and heterophily.
H2NT can be used as an enhancer integrated with any existing network embedding method, without requiring any changes to the latter.
arXiv Detail & Related papers (2020-12-21T15:03:18Z)
- Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels.
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.