Privacy-preserving Graph Analytics: Secure Generation and Federated
Learning
- URL: http://arxiv.org/abs/2207.00048v1
- Date: Thu, 30 Jun 2022 18:26:57 GMT
- Title: Privacy-preserving Graph Analytics: Secure Generation and Federated
Learning
- Authors: Dongqi Fu, Jingrui He, Hanghang Tong, Ross Maciejewski
- Abstract summary: We focus on the privacy-preserving analysis of graph data, which provides the crucial capacity to represent rich attributes and relationships.
We discuss two directions, namely privacy-preserving graph generation and federated graph learning, which can jointly enable the collaboration among multiple parties each possessing private graph data.
- Score: 72.90158604032194
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Directly motivated by security-related applications from the Homeland
Security Enterprise, we focus on the privacy-preserving analysis of graph data,
which provides the crucial capacity to represent rich attributes and
relationships. In particular, we discuss two directions, namely
privacy-preserving graph generation and federated graph learning, which can
jointly enable the collaboration among multiple parties each possessing private
graph data. For each direction, we identify both "quick wins" and "hard
problems". Towards the end, we demonstrate a user interface that can facilitate
model explanation, interpretation, and visualization. We believe that the
techniques developed in these directions will significantly enhance the
capabilities of the Homeland Security Enterprise to tackle and mitigate the
various security risks.
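The federated graph learning direction above typically builds on federated averaging, in which each party trains locally on its private graph and shares only model parameters. As a minimal sketch of that building block (plain NumPy, hypothetical client weights; not the authors' implementation):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client model parameters into a
    global model, weighting each client by its local dataset size.
    Raw private graph data never leaves the client."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)       # (num_clients, num_params)
    coeffs = np.array(client_sizes) / total  # per-client mixing weight
    return coeffs @ stacked                  # size-weighted average

# Toy example: three parties with different amounts of private graph data.
w = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
n = [10, 20, 70]
global_w = fedavg(w, n)  # dominated by the largest client
```

Each round, the server would broadcast `global_w` back to the clients for the next local training step.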
Related papers
- A Survey of Graph Unlearning [11.841882902141696]
Graph unlearning provides the means to remove sensitive data traces from trained models, thereby upholding the right to be forgotten.
We present the first systematic review of graph unlearning approaches, encompassing a diverse array of methodologies.
We explore the versatility of graph unlearning across various domains, including but not limited to social networks, adversarial settings, and resource-constrained environments.
arXiv Detail & Related papers (2023-08-23T20:50:52Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and analyzed with heterogeneous graph neural networks (HGNNs).
We propose HeteDP, a privacy-preserving heterogeneous graph neural network method based on a differential privacy mechanism.
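HeteDP's exact mechanism is specified in the paper; as a generic illustration of the differential-privacy building block such methods rely on, here is a Laplace mechanism sketch (function name and example are ours, not the paper's):

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Release `value` with epsilon-differential privacy by adding
    Laplace noise with scale sensitivity / epsilon."""
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return value + rng.laplace(0.0, scale, size=np.shape(value))

# Toy example: privatize a node-degree count. Under edge-level DP,
# adding or removing one edge changes a degree by 1, so sensitivity = 1.
noisy_degree = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy but larger noise; graph methods differ mainly in which statistic or gradient the noise is applied to.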
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- A Survey of Trustworthy Graph Learning: Reliability, Explainability, and Privacy Protection [136.71290968343826]
Trustworthy graph learning (TwGL) aims to address these reliability, explainability, and privacy concerns from a technical viewpoint.
In contrast to conventional graph learning research, which focuses mainly on model performance, TwGL also considers reliability and safety.
arXiv Detail & Related papers (2022-05-20T08:10:35Z)
- Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective [44.53121844947585]
Existing representation learning methods on graphs could leak serious private information.
We propose a privacy-preserving representation learning framework on graphs from the mutual information perspective.
arXiv Detail & Related papers (2021-07-03T18:09:44Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
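GMI is the paper's own graph-specific estimator; as a toy illustration of the underlying quantity only, mutual information between two discrete variables can be computed directly from their joint distribution (this example is ours, not from the paper):

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ),
    in nats, for a joint probability table `joint`."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x), column vector
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y), row vector
    mask = joint > 0                       # 0 * log 0 = 0 by convention
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())

# Perfectly correlated bits share log(2) nats; independent bits share none.
perfect = [[0.5, 0.0], [0.0, 0.5]]
independent = [[0.25, 0.25], [0.25, 0.25]]
```

Neural estimators like GMI are needed because, for high-dimensional graph representations, this joint table is never available in closed form.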
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.