Graph Information Bottleneck
- URL: http://arxiv.org/abs/2010.12811v1
- Date: Sat, 24 Oct 2020 07:13:00 GMT
- Title: Graph Information Bottleneck
- Authors: Tailin Wu, Hongyu Ren, Pan Li, Jure Leskovec
- Abstract summary: Graph Neural Networks (GNNs) provide an expressive way to fuse information from network structure and node features.
Inheriting from the general Information Bottleneck (IB), GIB aims to learn the minimal sufficient representation for a given task.
We show that our proposed models are more robust than state-of-the-art graph defense models.
- Score: 77.21967740646784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representation learning of graph-structured data is challenging because both
graph structure and node features carry important information. Graph Neural
Networks (GNNs) provide an expressive way to fuse information from network
structure and node features. However, GNNs are prone to adversarial attacks.
Here we introduce Graph Information Bottleneck (GIB), an information-theoretic
principle that optimally balances expressiveness and robustness of the learned
representation of graph-structured data. Inheriting from the general
Information Bottleneck (IB), GIB aims to learn the minimal sufficient
representation for a given task by maximizing the mutual information between
the representation and the target, and simultaneously constraining the mutual
information between the representation and the input data. Different from the
general IB, GIB regularizes the structural as well as the feature information.
We design two sampling algorithms for structural regularization and instantiate
the GIB principle with two new models: GIB-Cat and GIB-Bern, and demonstrate
the benefits by evaluating the resilience to adversarial attacks. We show that
our proposed models are more robust than state-of-the-art graph defense models.
GIB-based models empirically achieve up to 31% improvement with adversarial
perturbation of the graph structure as well as node features.
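The trade-off described in the abstract can be written compactly in the standard IB form. A minimal sketch in our own shorthand (the paper tracks layer-wise node representations; here D = (A, X) bundles the graph structure and node features, Z is the learned representation, Y the target, and beta > 0 sets the strength of the compression term):

```latex
% GIB objective: keep Z predictive of Y while compressing away
% everything else in the input data D = (A, X).
\min_{\mathbb{P}(Z \mid \mathcal{D})}
  \mathrm{GIB}_{\beta}(\mathcal{D}, Y; Z)
  \triangleq -\, I(Y; Z) + \beta\, I(\mathcal{D}; Z)
```

Unlike the general IB, the structural and feature contributions to I(D; Z) are regularized separately.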
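As a concrete illustration of the structural regularization, below is a minimal PyTorch sketch in the spirit of GIB-Bern: each edge is kept with a learned probability, relaxed with a binary-concrete trick so sampling stays differentiable. The module name, the edge scorer, and the KL penalty are illustrative assumptions, not the authors' exact algorithm.

```python
import torch
import torch.nn as nn


class BernoulliEdgeSampler(nn.Module):
    """Keep each edge with a learned probability (relaxed Bernoulli).
    Illustrative sketch of GIB-Bern-style structural sampling, not the
    paper's exact implementation."""

    def __init__(self, hidden_dim: int, temperature: float = 0.5):
        super().__init__()
        # Edge logit from the concatenated endpoint embeddings (assumed scorer).
        self.score = nn.Linear(2 * hidden_dim, 1)
        self.temperature = temperature

    def forward(self, z: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # z: [num_nodes, hidden_dim]; edge_index: [2, num_edges]
        src, dst = edge_index
        logits = self.score(torch.cat([z[src], z[dst]], dim=-1)).squeeze(-1)
        if self.training:
            # Binary-concrete relaxation: a differentiable approximate Bernoulli draw.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            keep = torch.sigmoid((logits + noise) / self.temperature)
        else:
            keep = torch.sigmoid(logits)
        return keep  # soft edge mask in [0, 1], used to reweight messages


def structure_kl(keep: torch.Tensor, prior_p: float = 0.5) -> torch.Tensor:
    """Per-edge KL(Bern(keep) || Bern(prior_p)), summed: a tractable stand-in
    for the structural part of I(D; Z), following the variational-IB recipe."""
    p = keep.clamp(1e-6, 1 - 1e-6)
    return (p * torch.log(p / prior_p)
            + (1 - p) * torch.log((1 - p) / (1 - prior_p))).sum()
```

In a GIB-style loss, the soft mask would reweight messages inside the GNN layer, and structure_kl against a fixed Bernoulli prior would stand in for the structural term of the objective above.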
Related papers
- Learning to Model Graph Structural Information on MLPs via Graph Structure Self-Contrasting [50.181824673039436]
We propose a Graph Structure Self-Contrasting (GSSC) framework that learns graph structural information without message passing.
The proposed framework is based purely on Multi-Layer Perceptrons (MLPs), where the structural information is only implicitly incorporated as prior knowledge.
It first applies structural sparsification to remove potentially uninformative or noisy edges in the neighborhood, and then performs structural self-contrasting in the sparsified neighborhood to learn robust node representations.
arXiv Detail & Related papers (2024-09-09T12:56:02Z) - MDS-GNN: A Mutual Dual-Stream Graph Neural Network on Graphs with Incomplete Features and Structure [8.00268216176428]
Graph Neural Networks (GNNs) have emerged as powerful tools for analyzing and learning representations from graph-structured data.
A crucial prerequisite for the outstanding performance of GNNs is the availability of complete graph information.
This study proposes a mutual dual-stream graph neural network (MDS-GNN), which implements mutual-benefit learning between features and structure.
arXiv Detail & Related papers (2024-08-09T03:42:56Z) - Node Classification via Semantic-Structural Attention-Enhanced Graph Convolutional Networks [0.9463895540925061]
We introduce the semantic-structural attention-enhanced graph convolutional network (SSA-GCN).
It not only models the graph structure but also extracts generalized unsupervised features to enhance classification performance.
Our experiments on the Cora and CiteSeer datasets demonstrate the performance improvements achieved by our proposed method.
arXiv Detail & Related papers (2024-03-24T06:28:54Z) - Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models must estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z) - DGNN: Decoupled Graph Neural Networks with Structural Consistency
between Attribute and Graph Embedding Representations [62.04558318166396]
Graph neural networks (GNNs) demonstrate a robust capability for representation learning on graphs with complex structures.
A novel GNN framework, dubbed Decoupled Graph Neural Networks (DGNN), is introduced to obtain a more comprehensive embedding representation of nodes.
Experimental results on several graph benchmark datasets verify DGNN's superiority in the node classification task.
arXiv Detail & Related papers (2024-01-28T06:43:13Z) - ENGAGE: Explanation Guided Data Augmentation for Graph Representation
Learning [34.23920789327245]
We propose ENGAGE, in which explanations guide the contrastive augmentation process to preserve the key parts of graphs.
We also design two data augmentation schemes on graphs for perturbing structural and feature information, respectively.
arXiv Detail & Related papers (2023-07-03T14:33:14Z) - Graph Structure Learning with Variational Information Bottleneck [70.62851953251253]
We propose a novel Variational Information Bottleneck guided Graph Structure Learning framework, namely VIB-GSL.
VIB-GSL learns an informative and compressive graph structure to distill the actionable information for specific downstream tasks.
arXiv Detail & Related papers (2021-12-16T14:22:13Z) - Graph Representation Learning via Graphical Mutual Information
Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantees about the quality of the information presented here and accepts no responsibility for any consequences of its use.