Free Lunch for Privacy Preserving Distributed Graph Learning
- URL: http://arxiv.org/abs/2305.10869v2
- Date: Fri, 19 May 2023 05:06:44 GMT
- Title: Free Lunch for Privacy Preserving Distributed Graph Learning
- Authors: Nimesh Agrawal, Nikita Malik, Sandeep Kumar
- Abstract summary: We present a novel privacy-respecting framework for distributed graph learning and graph-based machine learning.
The framework learns feature representations as well as pairwise distances without requiring access to the actual features, while preserving the original structural properties of the raw data.
- Score: 1.8292714902548342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning on graphs is becoming prevalent in a wide range of applications
including social networks, robotics, communication, medicine, etc. These
datasets belonging to entities often contain critical private information. The
utilization of data for graph learning applications is hampered by the growing
privacy concerns from users on data sharing. Existing privacy-preserving
methods pre-process the data to extract user-side features, and only these
features are used for subsequent learning. Unfortunately, these methods remain
vulnerable to adversarial attacks that infer private attributes. We present a
novel privacy-respecting framework for distributed graph learning and
graph-based machine learning. To enable graph learning and other downstream
tasks on the server side, the framework learns feature representations as well
as pairwise distances without requiring access to the actual raw features,
while preserving the original structural properties of the data. The proposed
framework is generic and highly adaptable. We demonstrate its utility in
Euclidean space, but it can be combined with any existing method of distance
approximation and graph learning for the relevant space. Through extensive
experiments on both synthetic and real datasets, we demonstrate the efficacy of
the framework by comparing the results obtained without data sharing against
those obtained with full data sharing as a benchmark. This is, to our
knowledge, the first privacy-preserving distributed graph learning framework.
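The abstract's core idea, learning pairwise distances on the server side without ever seeing the raw client features, can be illustrated with a minimal sketch. Note this is NOT the paper's actual method: as an illustrative stand-in it uses a Johnson-Lindenstrauss random projection, which approximately preserves Euclidean distances, followed by a simple server-side k-NN graph construction; the function names and parameters are hypothetical.

```python
import numpy as np

def random_projection(X, k, rng):
    """Client-side (illustrative): project features through a random Gaussian
    matrix. Pairwise Euclidean distances are approximately preserved
    (Johnson-Lindenstrauss), so the raw features need not be shared."""
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R

def knn_graph_from_points(Z, n_neighbors=3):
    """Server-side (illustrative): build a symmetric k-NN adjacency matrix
    from the projected points alone, using only pairwise distances."""
    n = Z.shape[0]
    # All pairwise Euclidean distances via broadcasting.
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        nearest = np.argsort(D[i])[1:n_neighbors + 1]  # skip self at index 0
        A[i, nearest] = 1
    return np.maximum(A, A.T)  # symmetrize the adjacency matrix

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 50))          # private client features (never shared)
Z = random_projection(X, k=25, rng=rng)    # only the projection is shared
A = knn_graph_from_points(Z, n_neighbors=3)
```

The server operates only on `Z`, from which the original `X` cannot be directly read off; the actual paper goes further by also learning the features and distances rather than applying a fixed projection.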
Related papers
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE) with the aid of independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Privacy-Preserving Graph Machine Learning from Data to Computation: A Survey [67.7834898542701]
We focus on reviewing privacy-preserving techniques of graph machine learning.
We first review methods for generating privacy-preserving graph data.
Then we describe methods for transmitting privacy-preserved information.
arXiv Detail & Related papers (2023-07-10T04:30:23Z)
- Approximate, Adapt, Anonymize (3A): a Framework for Privacy Preserving Training Data Release for Machine Learning [3.29354893777827]
We introduce a data release framework, 3A (Approximate, Adapt, Anonymize), to maximize data utility for machine learning.
We present experimental evidence showing minimal discrepancy between performance metrics of models trained on real versus privatized datasets.
arXiv Detail & Related papers (2023-07-04T18:37:11Z)
- Privacy-Preserving Representation Learning on Graphs: A Mutual Information Perspective [44.53121844947585]
Existing representation learning methods on graphs could leak serious private information.
We propose a privacy-preserving representation learning framework on graphs from the mutual information perspective.
arXiv Detail & Related papers (2021-07-03T18:09:44Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Learnable Graph Matching: Incorporating Graph Partitioning with Deep Feature Learning for Multiple Object Tracking [58.30147362745852]
Data association across frames is at the core of the Multiple Object Tracking (MOT) task.
Existing methods mostly ignore the context information among tracklets and intra-frame detections.
We propose a novel learnable graph matching method to address these issues.
arXiv Detail & Related papers (2021-03-30T08:58:45Z)
- TIPRDC: Task-Independent Privacy-Respecting Data Crowdsourcing Framework for Deep Learning with Anonymized Intermediate Representations [49.20701800683092]
We present TIPRDC, a task-independent privacy-respecting data crowdsourcing framework with anonymized intermediate representations.
The goal of this framework is to learn a feature extractor that hides the private information in the intermediate representations, while maximally retaining the original information embedded in the raw data so that the data collector can accomplish unknown learning tasks.
arXiv Detail & Related papers (2020-05-23T06:21:26Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
We further apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.