EGC2: Enhanced Graph Classification with Easy Graph Compression
- URL: http://arxiv.org/abs/2107.07737v1
- Date: Fri, 16 Jul 2021 07:17:29 GMT
- Title: EGC2: Enhanced Graph Classification with Easy Graph Compression
- Authors: Jinyin Chen, Dunjie Zhang, Zhaoyan Ming, Mingwei Jia, and Yi Liu
- Abstract summary: We propose EGC$^2$, an enhanced graph classification model with easy graph compression.
EGC$^2$ captures the relationships between the features of different nodes by constructing feature graphs and improving the aggregation of node-level representations.
Experiments on seven benchmark datasets demonstrate that the proposed feature read-out and graph compression mechanisms enhance the robustness of various basic models.
- Score: 3.599345724913102
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph classification plays a significant role in network analysis, but it
also faces potential security threats such as adversarial attacks. Some defense
methods, such as adversarial training, sacrifice algorithmic complexity for
robustness, while others, such as smoothing-based defenses, sacrifice performance
on clean examples. Most suffer from high complexity or limited transferability.
To address this problem, we propose EGC$^2$, an enhanced graph classification
model with easy graph compression. EGC$^2$ captures the relationships between the
features of different nodes by constructing feature graphs and improving the
aggregation of node-level representations. To provide a low-complexity defense
applicable to various graph classification models, EGC$^2$ uses a
centrality-based edge importance index to compress graphs, filtering out trivial
structures and even adversarial perturbations of the input graphs, thus improving
its robustness. Experiments on seven benchmark datasets demonstrate that the
proposed feature read-out and graph compression mechanisms enhance the robustness
of various basic models, achieving state-of-the-art accuracy and robustness under
different adversarial attacks.
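The centrality-based compression step can be sketched in a few lines: rank edges by an importance index and keep only the top fraction. The sketch below uses edge betweenness centrality as the index and a `keep_ratio` threshold; both are illustrative assumptions, since the abstract does not specify the exact centrality measure or filtering rule used by EGC$^2$.

```python
# Illustrative sketch of centrality-based graph compression (not the paper's
# actual implementation): rank edges by edge betweenness centrality and keep
# only the top keep_ratio fraction, discarding low-importance structure.
import networkx as nx

def compress_graph(G: nx.Graph, keep_ratio: float = 0.5) -> nx.Graph:
    """Return a copy of G keeping the top keep_ratio fraction of edges,
    ranked by edge betweenness centrality."""
    importance = nx.edge_betweenness_centrality(G)  # {(u, v): score}
    ranked = sorted(importance, key=importance.get, reverse=True)
    n_keep = max(1, int(len(ranked) * keep_ratio))
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))  # keep all nodes and their attributes
    H.add_edges_from(ranked[:n_keep])     # drop low-importance edges
    return H

G = nx.karate_club_graph()        # 34 nodes, 78 edges
H = compress_graph(G, keep_ratio=0.5)
print(G.number_of_edges(), H.number_of_edges())  # 78 39
```

In this sketch the compressed graph keeps every node (so node features survive for the classifier) and only removes edges, which is consistent with the abstract's goal of filtering out trivial structures and adversarial edge perturbations.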
Related papers
- RobGC: Towards Robust Graph Condensation [61.259453496191696]
Graph neural networks (GNNs) have attracted widespread attention for their impressive capability of graph representation learning.
However, the increasing prevalence of large-scale graphs presents a significant challenge for GNN training due to their computational demands.
We propose graph condensation (GC) to generate an informative compact graph that enables efficient training of GNNs while retaining performance.
arXiv Detail & Related papers (2024-06-19T04:14:57Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Similarity Preserving Adversarial Graph Contrastive Learning [5.671825576834061]
We propose SP-AGCL, a similarity-preserving adversarial graph contrastive learning framework.
In this paper, we show that SP-AGCL achieves competitive performance on several downstream tasks.
arXiv Detail & Related papers (2023-06-24T04:02:50Z)
- Structure-free Graph Condensation: From Large-scale Graphs to Condensed Graph-free Data [91.27527985415007]
Existing graph condensation methods rely on the joint optimization of nodes and structures in the condensed graph.
We advocate a new Structure-Free Graph Condensation paradigm, named SFGC, to distill a large-scale graph into a small-scale graph node set.
arXiv Detail & Related papers (2023-06-05T07:53:52Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Graph Pooling with Maximum-Weight $k$-Independent Sets [12.251091325930837]
We introduce a graph coarsening mechanism based on the graph-theoretic concept of maximum-weight $k$-independent sets.
We prove theoretical guarantees for distortion bounds on path lengths, as well as the ability to preserve key topological properties in the coarsened graphs.
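The coarsening idea behind that paper can be illustrated with a greedy approximation: select a set of mutually non-adjacent "center" nodes by weight, which can then anchor the coarsened graph. The greedy routine below is a standard approximation sketch, not the paper's exact algorithm, and it handles only the $k=1$ (ordinary independent set) case.

```python
# Hedged sketch: greedy approximation of a maximum-weight independent set,
# the building block of independent-set-based graph coarsening. The paper's
# k-hop generalization and selection rule may differ.
import networkx as nx

def greedy_mwis(G: nx.Graph, weight: str = "weight") -> set:
    """Greedily pick heavy nodes first, removing their neighbors, so the
    chosen set is independent (no two chosen nodes share an edge)."""
    remaining = set(G.nodes)
    chosen = set()
    for u in sorted(G.nodes, key=lambda n: G.nodes[n].get(weight, 1.0),
                    reverse=True):
        if u in remaining:
            chosen.add(u)
            remaining.discard(u)
            remaining -= set(G.neighbors(u))  # neighbors become ineligible
    return chosen

G = nx.path_graph(5)       # path 0-1-2-3-4, unit weights
centers = greedy_mwis(G)
print(centers)             # {0, 2, 4}
```

Each non-center node is adjacent to at least one center (the set is maximal), so the centers can serve as supernodes when contracting the graph.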
arXiv Detail & Related papers (2022-08-06T14:12:47Z)
- Reliable Representations Make A Stronger Defender: Unsupervised Structure Refinement for Robust GNN [36.045702771828736]
Graph Neural Networks (GNNs) have been successful on a wide range of tasks over graph data.
Recent studies have shown that attackers can catastrophically degrade the performance of GNNs by maliciously modifying the graph structure.
We propose an unsupervised pipeline, named STABLE, to optimize the graph structure.
arXiv Detail & Related papers (2022-06-30T10:02:32Z)
- Exploring High-Order Structure for Robust Graph Structure Learning [33.62223306095631]
Graph Neural Networks (GNNs) are vulnerable to adversarial attack, i.e., an imperceptible structure perturbation can fool GNNs to make wrong predictions.
In this paper, we analyze the adversarial attack on graphs from the perspective of feature smoothness.
We propose a novel algorithm that incorporates the high-order structural information into the graph structure learning.
arXiv Detail & Related papers (2022-03-22T07:03:08Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates the fake neighbor nodes as the enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- GraphAttacker: A General Multi-Task GraphAttack Framework [4.218118583619758]
Graph Neural Networks (GNNs) have been successfully exploited in graph analysis tasks in many real-world applications.
However, they are vulnerable to adversarial samples generated by attackers, which achieve strong attack performance with almost imperceptible perturbations.
We propose GraphAttacker, a novel generic graph attack framework that can flexibly adjust the structures and the attack strategies according to the graph analysis tasks.
arXiv Detail & Related papers (2021-01-18T03:06:41Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.