Maximizing Mutual Information Across Feature and Topology Views for
Learning Graph Representations
- URL: http://arxiv.org/abs/2105.06715v1
- Date: Fri, 14 May 2021 08:49:40 GMT
- Title: Maximizing Mutual Information Across Feature and Topology Views for
Learning Graph Representations
- Authors: Xiaolong Fan, Maoguo Gong, Yue Wu, Hao Li
- Abstract summary: We propose a novel approach by exploiting mutual information across feature and topology views.
Our proposed method can achieve comparable or even better performance under the unsupervised representation and linear evaluation protocol.
- Score: 25.756202627564505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, maximizing mutual information has emerged as a powerful method for
unsupervised graph representation learning. Existing methods are typically
effective at capturing information from the topology view but ignore the feature
view. To circumvent this issue, we propose a novel approach by exploiting
mutual information maximization across feature and topology views.
Specifically, we first utilize a multi-view representation learning module to
better capture both local and global information content across feature and
topology views on graphs. To model the information shared by the feature and
topology spaces, we then develop a common representation learning module using
mutual information maximization and reconstruction loss minimization. To
explicitly encourage diversity, we also introduce a disagreement regularization
that enlarges the distance between graph representations from the same view.
Experiments on synthetic and
real-world datasets demonstrate the effectiveness of integrating feature and
topology views. In particular, compared with the previous supervised methods,
our proposed method can achieve comparable or even better performance under the
unsupervised representation and linear evaluation protocol.
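The abstract combines three training signals: a mutual-information term across the feature and topology views, per-view reconstruction losses, and a disagreement regularization on representations from the same view. The following is a minimal NumPy sketch of how such a combined objective could be assembled; it uses a cosine-agreement surrogate for the mutual-information term (the paper's actual estimator, encoder architectures, and loss weights are not specified here, and all function and variable names are illustrative).

```python
import numpy as np

def cosine_agreement(z_feat, z_topo):
    """Mean cosine similarity between paired common representations from
    the feature and topology views. Maximizing agreement is a simple
    stand-in for mutual-information maximization across the two views."""
    a = z_feat / np.linalg.norm(z_feat, axis=1, keepdims=True)
    b = z_topo / np.linalg.norm(z_topo, axis=1, keepdims=True)
    return float(np.mean(np.sum(a * b, axis=1)))

def reconstruction_loss(x, x_hat):
    """Mean-squared error between an input and its reconstruction."""
    return float(np.mean((x - x_hat) ** 2))

def disagreement_regularizer(z_common, z_specific):
    """Penalty that shrinks as the common and view-specific
    representations from the SAME view move apart, encouraging
    diversity between them."""
    dist = np.linalg.norm(z_common - z_specific, axis=1)
    return float(-np.mean(dist))

def total_loss(x_f, x_f_hat, x_t, x_t_hat,
               z_f, z_t, s_f, s_t,
               alpha=1.0, beta=1.0, gamma=0.1):
    """Combined objective: maximize cross-view agreement (so its
    negative is minimized), minimize reconstruction error for both
    views, and apply the disagreement regularization per view.
    The weights alpha/beta/gamma are illustrative hyperparameters."""
    mi_term = -cosine_agreement(z_f, z_t)
    rec_term = reconstruction_loss(x_f, x_f_hat) + reconstruction_loss(x_t, x_t_hat)
    dis_term = disagreement_regularizer(z_f, s_f) + disagreement_regularizer(z_t, s_t)
    return alpha * mi_term + beta * rec_term + gamma * dis_term
```

In a full pipeline the `z_*` common and `s_*` view-specific representations would come from trained graph encoders over the feature and topology views, and this scalar would be minimized by gradient descent; the sketch only shows how the three signals compose.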
Related papers
- Learning Representations without Compositional Assumptions [79.12273403390311]
We propose a data-driven approach that learns feature set dependencies by representing feature sets as graph nodes and their relationships as learnable edges.
We also introduce LEGATO, a novel hierarchical graph autoencoder that learns a smaller, latent graph to aggregate information from multiple views dynamically.
arXiv Detail & Related papers (2023-05-31T10:36:10Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Towards Consistency and Complementarity: A Multiview Graph Information Bottleneck Approach [25.40829979251883]
How to model and integrate shared (i.e. consistency) and view-specific (i.e. complementarity) information is a key issue in multiview graph analysis.
We propose a novel Multiview Variational Graph Information Bottleneck (MVGIB) principle to maximize the agreement for common representations and the disagreement for view-specific representations.
arXiv Detail & Related papers (2022-10-11T13:51:34Z)
- Cross-View-Prediction: Exploring Contrastive Feature for Hyperspectral Image Classification [9.131465469247608]
This paper presents a self-supervised feature learning method for hyperspectral image classification.
Our method constructs two different views of the raw hyperspectral image through a cross-representation learning method.
It then learns semantically consistent representations over the created views via contrastive learning.
arXiv Detail & Related papers (2022-03-14T11:07:33Z)
- ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- Two-Level Adversarial Visual-Semantic Coupling for Generalized Zero-shot Learning [21.89909688056478]
We propose a new two-level joint idea to augment the generative network with an inference network during training.
This provides strong cross-modal interaction for effective transfer of knowledge between visual and semantic domains.
We evaluate our approach on four benchmark datasets against several state-of-the-art methods, and show its effectiveness.
arXiv Detail & Related papers (2020-07-15T15:34:09Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.