Beyond Redundancy: Information-aware Unsupervised Multiplex Graph
Structure Learning
- URL: http://arxiv.org/abs/2409.17386v1
- Date: Wed, 25 Sep 2024 22:00:26 GMT
- Title: Beyond Redundancy: Information-aware Unsupervised Multiplex Graph
Structure Learning
- Authors: Zhixiang Shen, Shuo Wang, Zhao Kang
- Abstract summary: Unsupervised Multiplex Graph Learning (UMGL) aims to learn node representations on various edge types without manual labeling.
In this paper, we focus on a more realistic and challenging task: learning a fused graph from multiple graphs without supervision.
Specifically, our proposed Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses graph structure refinement to eliminate irrelevant noise.
- Score: 12.138893216674457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Multiplex Graph Learning (UMGL) aims to learn node
representations on various edge types without manual labeling. However,
existing research overlooks a key factor: the reliability of the graph
structure. Real-world data often exhibit a complex nature and contain abundant
task-irrelevant noise, severely compromising UMGL's performance. Moreover,
existing methods primarily rely on contrastive learning to maximize mutual
information across different graphs, limiting them to the redundant multiplex
graph scenario and failing to capture view-unique task-relevant information. In
this paper, we focus on a more realistic and challenging task: learning, without
supervision, a fused graph from multiple graphs that preserves sufficient
task-relevant information while removing task-irrelevant noise. Specifically, our proposed
Information-aware Unsupervised Multiplex Graph Fusion framework (InfoMGF) uses
graph structure refinement to eliminate irrelevant noise and simultaneously
maximizes view-shared and view-unique task-relevant information, thereby
tackling the frontier of non-redundant multiplex graph learning. Theoretical analyses
further guarantee the effectiveness of InfoMGF. Comprehensive experiments
against various baselines on different downstream tasks demonstrate its
superior performance and robustness. Surprisingly, our unsupervised method even
beats sophisticated supervised approaches. The source code and datasets are
available at https://github.com/zxlearningdeep/InfoMGF.
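The abstract outlines the recipe only at a high level: refine each graph's structure to drop task-irrelevant noise, then learn a fused graph that stays informative about both shared and view-unique task-relevant signal. Below is a minimal, hypothetical PyTorch sketch of that general recipe, not the authors' InfoMGF implementation (which is in the linked repository); `FusedGraphLearner`, `knn_sparsify`, `view_agreement`, the kNN sparsification, and the InfoNCE-style surrogate are all illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the authors' InfoMGF implementation):
# learn a fused graph from several views, sparsify it to drop likely
# task-irrelevant edges, and train it so that node embeddings on the fused
# graph remain predictive of every individual view.
import torch
import torch.nn.functional as F


def knn_sparsify(sim: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Keep the k largest similarities per row as a crude structure refinement."""
    sim = sim.clamp(min=0.0)                       # non-negative edge weights
    topk = torch.topk(sim, k, dim=-1).indices
    mask = torch.zeros_like(sim).scatter_(-1, topk, 1.0)
    adj = sim * mask
    return 0.5 * (adj + adj.T)                     # symmetrize


class FusedGraphLearner(torch.nn.Module):
    """Produces a dense fused adjacency from node features shared by all views."""

    def __init__(self, in_dim: int, hid_dim: int = 64, k: int = 10):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, hid_dim)
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.normalize(self.proj(x), dim=-1)
        return knn_sparsify(z @ z.T, self.k)       # [N, N] fused graph


def gcn_embed(adj: torch.Tensor, x: torch.Tensor, w: torch.nn.Linear) -> torch.Tensor:
    """One symmetrically normalized graph-convolution layer used as a shared encoder."""
    deg = adj.sum(-1).clamp(min=1e-6)
    norm = adj / deg.sqrt().unsqueeze(-1) / deg.sqrt().unsqueeze(0)
    return F.normalize(w(norm @ x), dim=-1)


def view_agreement(z_fused: torch.Tensor, z_view: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style surrogate: aligning fused and per-view embeddings of the
    same node is one common way to lower-bound their mutual information."""
    logits = z_fused @ z_view.T / tau
    labels = torch.arange(z_fused.size(0))
    return F.cross_entropy(logits, labels)


# Toy usage with random data: two views (e.g., two edge types) over 100 nodes.
N, D = 100, 32
x = torch.randn(N, D)
views = [knn_sparsify(torch.randn(N, N), k=8) for _ in range(2)]

learner = FusedGraphLearner(D)
encoder = torch.nn.Linear(D, 64)
opt = torch.optim.Adam(list(learner.parameters()) + list(encoder.parameters()), lr=1e-3)

for step in range(5):
    a_fused = learner(x)
    z_fused = gcn_embed(a_fused, x, encoder)
    # Encourage the fused graph to stay informative about *each* view,
    # not only about what the views have in common.
    loss = sum(view_agreement(z_fused, gcn_embed(a, x, encoder)) for a in views)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

Summing the agreement over every view is only a rough stand-in for the separate view-shared and view-unique objectives the abstract describes, and the kNN sparsification stands in for the paper's graph structure refinement.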
Related papers
- InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z)
- Representation learning in multiplex graphs: Where and how to fuse information? [5.0235828656754915]
Multiplex graphs possess richer information, provide better modeling capabilities and integrate more detailed data from potentially different sources.
In this paper, we tackle the problem of learning representations for nodes in multiplex networks in an unsupervised or self-supervised manner.
We propose improvements in how to construct GNN architectures that deal with multiplex graphs.
arXiv Detail & Related papers (2024-02-27T21:47:06Z)
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Learning correspondences aims to find correct correspondences from the initial correspondence set with an uneven correspondence distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z)
- Unsupervised Multiplex Graph Learning with Complementary and Consistent Information [20.340977728674698]
Unsupervised multiplex graph learning (UMGL) has been shown to achieve significant effectiveness for different downstream tasks.
Previous methods usually overlook the issues in practical applications, i.e., the out-of-sample issue and the noise issue.
We propose an effective and efficient method to explore both complementary and consistent information.
arXiv Detail & Related papers (2023-08-03T08:24:08Z)
- Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure, but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal sketch of this anchor-graph contrast appears after this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Node Feature Extraction by Self-Supervised Multi-scale Neighborhood Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT)
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated by encoding features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Multiple Graph Learning for Scalable Multi-view Clustering [26.846642220480863]
We propose an efficient multiple graph learning model via a small number of anchor points and tensor Schatten p-norm minimization.
Specifically, we construct a hidden and tractable large graph by anchor graph for each view.
We develop an efficient algorithm, which scales linearly with the data size, to solve our proposed model.
arXiv Detail & Related papers (2021-06-29T13:10:56Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- Sub-graph Contrast for Scalable Self-Supervised Graph Representation Learning [21.0019144298605]
Existing graph neural networks fed with the complete graph data are not scalable due to their computation and memory costs.
Subg-Con is proposed, utilizing the strong correlation between central nodes and their sampled subgraphs to capture regional structure information.
Compared with existing graph representation learning approaches, Subg-Con has prominent performance advantages in weaker supervision requirements, model learning scalability, and parallelization.
arXiv Detail & Related papers (2020-09-22T01:58:19Z)
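The entry for "Towards Unsupervised Deep Graph Structure Learning" above describes optimizing a learned graph topology against an anchor graph with a contrastive loss. The sketch below illustrates that general idea under stated assumptions (a feature-kNN anchor graph, a dense learnable adjacency, and a node-level InfoNCE contrast); it is not that paper's implementation, and all names and hyperparameters are illustrative.

```python
# A minimal, hypothetical sketch of the anchor-graph contrast summarized in the
# "Towards Unsupervised Deep Graph Structure Learning" entry above; not that
# paper's implementation, and all names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def normalize_adj(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric degree normalization of a dense adjacency matrix."""
    deg = adj.sum(-1).clamp(min=1e-6)
    return adj / deg.sqrt().unsqueeze(-1) / deg.sqrt().unsqueeze(0)


N, D = 100, 16
x = torch.randn(N, D)

# Anchor view: a fixed feature-kNN graph with self-loops (one simple choice of
# learning target generated from the original data).
sim = F.normalize(x, dim=-1) @ F.normalize(x, dim=-1).T
anchor = (sim >= sim.topk(10, dim=-1).values[:, -1:]).float()
anchor = normalize_adj(0.5 * (anchor + anchor.T) + torch.eye(N))

# Learned view: a dense adjacency parameterized directly and kept symmetric.
adj_param = torch.nn.Parameter(0.01 * torch.randn(N, N))
encoder = torch.nn.Linear(D, 32)
opt = torch.optim.Adam([adj_param] + list(encoder.parameters()), lr=1e-2)

for step in range(20):
    learned = normalize_adj(torch.sigmoid(0.5 * (adj_param + adj_param.T)))
    z_anchor = F.normalize(encoder(anchor @ x), dim=-1)
    z_learned = F.normalize(encoder(learned @ x), dim=-1)
    # Contrastive agreement: each node on the learned graph should match the
    # same node on the anchor graph and differ from all other nodes.
    logits = z_learned @ z_anchor.T / 0.5
    loss = F.cross_entropy(logits, torch.arange(N))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final contrastive loss: {loss.item():.3f}")
```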