Measuring the Privacy Leakage via Graph Reconstruction Attacks on
Simplicial Neural Networks (Student Abstract)
- URL: http://arxiv.org/abs/2302.04373v1
- Date: Wed, 8 Feb 2023 23:40:24 GMT
- Title: Measuring the Privacy Leakage via Graph Reconstruction Attacks on
Simplicial Neural Networks (Student Abstract)
- Authors: Huixin Zhan, Kun Zhang, Keyi Lu, Victor S. Sheng
- Abstract summary: We study whether graph representations can be inverted to recover the graph used to generate them via a graph reconstruction attack (GRA).
We propose a GRA that recovers a graph's adjacency matrix from the representations via a graph decoder.
We find that SNN outputs exhibit the lowest privacy-preserving ability against the GRA.
- Score: 25.053461964775778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we measure privacy leakage by studying whether graph
representations can be inverted to recover the graph used to generate them via
a graph reconstruction attack (GRA). We propose a GRA that recovers a graph's
adjacency matrix from the representations via a graph decoder that minimizes
the reconstruction loss between the partial graph and the reconstructed graph.
We study three types of representations trained on the graph, i.e.,
representations output from a graph convolutional network (GCN), a graph
attention network (GAT), and our proposed simplicial neural network (SNN)
built on a higher-order combinatorial Laplacian. Unlike the first two types of
representations, which only encode pairwise relationships, the third type,
i.e., the SNN outputs, encodes higher-order interactions (e.g., homological
features) between nodes. We find that the SNN outputs exhibit the lowest
privacy-preserving ability against the GRA, followed by those of GATs and
GCNs. This indicates the importance of building more private representations
with higher-order node information that can defend against potential threats
such as GRAs.
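As a concrete illustration of the attack described above, here is a minimal
sketch assuming PyTorch, an inner-product graph decoder, and binary
cross-entropy as the reconstruction loss; the abstract does not specify the
decoder architecture, so `GraphDecoder`, `attack_step`, and all hyperparameters
below are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

class GraphDecoder(nn.Module):
    """Illustrative graph decoder: maps node representations to a
    probabilistic adjacency matrix via a learned projection followed
    by an inner product (the architecture is an assumption)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # hypothetical learned projection

    def forward(self, h):
        z = self.proj(h)                 # (n, dim)
        return torch.sigmoid(z @ z.t())  # (n, n) edge probabilities

def attack_step(decoder, h, partial_adj, known_mask, opt):
    """One GRA step: fit the decoder so the reconstructed graph matches
    the attacker's partial graph on the entries they already know."""
    opt.zero_grad()
    recon = decoder(h)
    loss = nn.functional.binary_cross_entropy(
        recon[known_mask], partial_adj[known_mask])
    loss.backward()
    opt.step()
    return loss.item()

# Usage: h stands in for representations leaked by the victim model
# (GCN/GAT/SNN outputs); the attacker knows 20% of adjacency entries.
n, d = 100, 64
h = torch.randn(n, d)
partial_adj = torch.bernoulli(torch.full((n, n), 0.05))
known_mask = torch.rand(n, n) < 0.2
decoder = GraphDecoder(d)
opt = torch.optim.Adam(decoder.parameters(), lr=1e-2)
for _ in range(200):
    attack_step(decoder, h, partial_adj, known_mask, opt)
```

After training, thresholding `decoder(h)` yields the recovered adjacency
matrix; attack success can then be measured on the entries the attacker did
not already know.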
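For background on the SNN, a standard form of the higher-order combinatorial
(Hodge) Laplacian is the following textbook definition; the abstract does not
give the authors' exact construction, so this is context rather than their
specific operator:

$$ L_k = B_k^{\top} B_k + B_{k+1} B_{k+1}^{\top}, $$

where $B_k$ is the boundary operator mapping $k$-simplices to
$(k-1)$-simplices. Setting $k = 0$ recovers the ordinary graph Laplacian
$L_0 = B_1 B_1^{\top}$, and $\ker L_k$ encodes the $k$-th homology (e.g.,
$\ker L_1$ corresponds to independent cycles), which is how such operators
expose homological features.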
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations.
The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions.
We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
arXiv Detail & Related papers (2022-09-15T17:41:50Z)
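The masked-edge recovery objective mentioned above can be sketched as follows; this is a generic, illustrative objective assuming PyTorch and a dot-product edge scorer over GNN embeddings, not DiP-GNN's actual generator/discriminator pair:

```python
import torch
import torch.nn.functional as F

def edge_mask_loss(embed, edges, mask_ratio=0.15):
    """Hold out a proportion of edges and train to recover them against
    randomly sampled non-edges. `embed` is the (n, d) node-embedding
    matrix, `edges` a (2, E) index tensor; in real pre-training, `embed`
    should be computed on the graph with the held-out edges removed."""
    n_mask = max(1, int(mask_ratio * edges.size(1)))
    idx = torch.randperm(edges.size(1))[:n_mask]
    pos = edges[:, idx]                               # held-out edges
    neg = torch.randint(0, embed.size(0), pos.shape)  # random non-edges
    pos_score = (embed[pos[0]] * embed[pos[1]]).sum(-1)
    neg_score = (embed[neg[0]] * embed[neg[1]]).sum(-1)
    # Score held-out edges high and random pairs low.
    return -(F.logsigmoid(pos_score).mean() + F.logsigmoid(-neg_score).mean())
```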
- Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z)
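The unrolling idea can be sketched as below, assuming each network layer implements one truncated proximal-gradient iteration with a learned step size and a ReLU/shrinkage proximal step; the residual used here is a deliberately simplified stand-in for GDN's actual convolutional-mixture fit term:

```python
import torch
import torch.nn as nn

class UnrolledDeconv(nn.Module):
    """Illustrative unrolled proximal gradient descent: each layer is one
    truncated iteration s <- relu(s - step * grad(s) - thresh)."""
    def __init__(self, n_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.threshs = nn.Parameter(torch.full((n_layers,), 0.01))

    def forward(self, a_obs):
        s = a_obs.clone()                 # initialize latent graph estimate
        for step, thresh in zip(self.steps, self.threshs):
            grad = s - a_obs              # simplified fit-term gradient
            s = torch.relu(s - step * grad - thresh)  # prox: shrink + relu
        return s
```

Training such a network end to end on pairs of observed and latent graphs is what makes the approach supervised and inductive.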
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14%, when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
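One common instantiation of node PE, sketched below, uses Laplacian eigenvectors concatenated to the input features; this is an assumption for illustration, since the paper's contribution is to make the PE learnable rather than fixed:

```python
import numpy as np

def lap_pe(adj, k=8):
    """Positional encodings from the k smallest nontrivial eigenvectors
    of the symmetric normalized Laplacian (a fixed-PE baseline; the
    paper's learnable PE goes beyond this)."""
    deg = adj.sum(axis=1)
    with np.errstate(divide="ignore"):
        d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)      # ascending eigenvalues
    return vecs[:, 1:k + 1]               # drop the trivial eigenvector

# Injection into the input layer, as in Transformers:
# x = np.concatenate([features, lap_pe(adj)], axis=1)
```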
- Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism into graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning framework.
Our proposed SCRL method treats graph from two perspectives: topology graph and feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- BiGCN: A Bi-directional Low-Pass Filtering Graph Neural Network [35.97496022085212]
Many graph convolutional networks can be regarded as low-pass filters for graph signals.
We propose a new model, BiGCN, which represents a graph neural network as a bi-directional low-pass filter.
Our model outperforms previous graph neural networks in the tasks of node classification and link prediction on most of the benchmark datasets.
arXiv Detail & Related papers (2021-01-14T09:41:00Z)
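The low-pass reading of graph convolutions can be made precise with a short, standard calculation (not specific to BiGCN): with the symmetric normalized Laplacian, a typical propagation step is

$$ H' = \left( I_n - \tfrac{1}{2} L \right) H, \qquad L = I_n - D^{-1/2} A D^{-1/2}, $$

whose frequency response $h(\lambda) = 1 - \lambda/2$ decays over the Laplacian spectrum $\lambda \in [0, 2]$, so rapidly varying (high-frequency) graph signals are attenuated while smooth ones pass through.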
- Hierarchical Representation Learning in Graph Neural Networks with Node Decimation Pooling [31.812988573924674]
In graph neural networks (GNNs), pooling operators compute local summaries of input graphs to capture their global properties.
We propose Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology.
NDP is more efficient than state-of-the-art graph pooling operators while, at the same time, reaching competitive performance on a significant variety of graph classification tasks.
arXiv Detail & Related papers (2019-10-24T21:42:12Z)