Latent Graph Inference with Limited Supervision
- URL: http://arxiv.org/abs/2310.04314v2
- Date: Mon, 18 Dec 2023 04:54:32 GMT
- Title: Latent Graph Inference with Limited Supervision
- Authors: Jianglin Lu, Yi Xu, Huan Wang, Yue Bai, Yun Fu
- Abstract summary: Latent graph inference (LGI) aims to jointly learn the underlying graph structure and node representations from data features.
Existing LGI methods commonly suffer from the issue of supervision starvation, where massive edge weights are learned without semantic supervision and do not contribute to the training loss.
In this paper, we observe that this issue is actually caused by the graph sparsification operation, which severely destroys the important connections established between pivotal nodes and labeled ones.
- Score: 58.54674649232757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent graph inference (LGI) aims to jointly learn the underlying graph
structure and node representations from data features. However, existing LGI
methods commonly suffer from the issue of supervision starvation, where massive
edge weights are learned without semantic supervision and do not contribute to
the training loss. Consequently, these supervision-starved weights, which may
determine the predictions of testing samples, cannot be semantically optimal,
resulting in poor generalization. In this paper, we observe that this issue is
actually caused by the graph sparsification operation, which severely destroys
the important connections established between pivotal nodes and labeled ones.
To address this, we propose to restore the corrupted affinities and replenish
the missed supervision for better LGI. The key challenge then lies in
identifying the critical nodes and recovering the corrupted affinities. We
begin by defining the pivotal nodes as $k$-hop starved nodes, which can be
identified based on a given adjacency matrix. Considering the high
computational burden, we further present a more efficient alternative inspired
by CUR matrix decomposition. Subsequently, we eliminate the starved nodes by
reconstructing the destroyed connections. Extensive experiments on
representative benchmarks demonstrate that reducing the starved nodes
consistently improves the performance of state-of-the-art LGI methods,
especially under extremely limited supervision (6.12% improvement on Pubmed
with a labeling rate of only 0.3%).
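The abstract defines pivotal nodes as $k$-hop starved nodes identifiable from a given adjacency matrix. A minimal NumPy sketch of that definition, assuming a node is "starved" when no labeled node lies within $k$ hops of it; the function name, dense-matrix representation, and API are illustrative and not the authors' implementation:

```python
import numpy as np

def k_hop_starved_nodes(adj, labeled_mask, k):
    """Return indices of nodes with no labeled node within k hops (including themselves)."""
    n = adj.shape[0]
    reach = adj > 0                        # 1-hop adjacency as a boolean matrix
    acc = reach | np.eye(n, dtype=bool)    # reachable within 1 hop, plus self
    hop = reach.copy()
    for _ in range(k - 1):
        hop = (hop.astype(int) @ reach.astype(int)) > 0  # extend reachability by one hop
        acc |= hop
    covered = acc[:, labeled_mask].any(axis=1)  # reaches at least one labeled node
    return np.flatnonzero(~covered)

# Toy example: chain 0-1-2 plus isolated node 3, with only node 0 labeled.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 0]])
labeled = np.array([True, False, False, False])
print(k_hop_starved_nodes(adj, labeled, k=1))  # nodes 2 and 3 are 1-hop starved
print(k_hop_starved_nodes(adj, labeled, k=2))  # only node 3 remains starved
```

The repeated boolean matrix product makes the quadratic-per-hop cost visible, which is what motivates the paper's more efficient CUR-decomposition-based alternative.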
Related papers
- Generative Semi-supervised Graph Anomaly Detection [42.02691404704764]
This work considers a practical semi-supervised graph anomaly detection (GAD) scenario, where part of the nodes in a graph are known to be normal.
We propose a novel Generative GAD approach (namely GGAD) for the semi-supervised scenario to better exploit the normal nodes.
GGAD is designed to leverage two important priors about the anomaly nodes -- asymmetric local affinity and egocentric closeness.
arXiv Detail & Related papers (2024-02-19T06:55:50Z) - Multitask Active Learning for Graph Anomaly Detection [48.690169078479116]
We propose a novel MultItask acTIve Graph Anomaly deTEction framework, namely MITIGATE.
By coupling node classification tasks, MITIGATE obtains the capability to detect out-of-distribution nodes without known anomalies.
Empirical studies on four datasets demonstrate that MITIGATE significantly outperforms the state-of-the-art methods for anomaly detection.
arXiv Detail & Related papers (2024-01-24T03:43:45Z) - KITS: Inductive Spatio-Temporal Kriging with Increment Training Strategy [22.457676652258087]
Kriging is the task of inferring values at unobserved nodes (without sensors) from observed source nodes (with sensors).
We present a novel Increment training strategy: instead of masking observed nodes (and reconstructing them), we add virtual nodes into the training graph, which naturally mitigates the graph gap issue.
We name our new Kriging model with Increment Training Strategy as KITS.
arXiv Detail & Related papers (2023-11-05T04:43:48Z) - BOURNE: Bootstrapped Self-supervised Learning Framework for Unified Graph Anomaly Detection [50.26074811655596]
We propose a novel unified graph anomaly detection framework based on bootstrapped self-supervised learning (named BOURNE).
By swapping the context embeddings between nodes and edges, we enable the mutual detection of node and edge anomalies.
BOURNE can eliminate the need for negative sampling, thereby enhancing its efficiency in handling large graphs.
arXiv Detail & Related papers (2023-07-28T00:44:57Z) - OrthoReg: Improving Graph-regularized MLPs via Orthogonality Regularization [66.30021126251725]
Graph Neural Networks (GNNs) currently dominate the modeling of graph-structured data.
Graph-regularized MLPs (GR-MLPs) implicitly inject graph structure information into model weights, yet their performance can hardly match that of GNNs in most tasks.
We show that GR-MLPs suffer from dimensional collapse, a phenomenon in which the largest few eigenvalues dominate the embedding space.
We propose OrthoReg, a novel GR-MLP model to mitigate the dimensional collapse issue.
arXiv Detail & Related papers (2023-01-31T21:20:48Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely, the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - A Systematic Evaluation of Node Embedding Robustness [77.29026280120277]
We assess the empirical robustness of node embedding models to random and adversarial poisoning attacks.
We compare edge addition, deletion and rewiring strategies computed using network properties as well as node labels.
We find that node classification suffers higher performance degradation than network reconstruction.
arXiv Detail & Related papers (2022-09-16T17:20:23Z) - Structure-Aware Hard Negative Mining for Heterogeneous Graph Contrastive Learning [21.702342154458623]
This work investigates Contrastive Learning (CL) on Graph Neural Networks (GNNs).
We first generate multiple semantic views according to metapaths and network schemas.
We then push node embeddings corresponding to different semantic views close to each other (positives) and pull other embeddings apart (negatives).
Considering the complex graph structure and the smoothing nature of GNNs, we propose a structure-aware hard negative mining scheme.
arXiv Detail & Related papers (2021-08-31T14:44:49Z) - Unsupervised Deep Manifold Attributed Graph Embedding [33.1202078188891]
We propose a novel graph embedding framework named Deep Manifold Attributed Graph Embedding (DMAGE)
A node-to-node geodesic similarity is proposed to compute the inter-node similarity between the data space and the latent space.
We then design a new network structure with fewer aggregation operations to alleviate the oversmoothing problem.
arXiv Detail & Related papers (2021-04-27T08:47:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.