Privacy-Preserving Individual-Level COVID-19 Infection Prediction via
Federated Graph Learning
- URL: http://arxiv.org/abs/2311.06049v1
- Date: Fri, 10 Nov 2023 13:22:14 GMT
- Title: Privacy-Preserving Individual-Level COVID-19 Infection Prediction via
Federated Graph Learning
- Authors: Wenjie Fu, Huandong Wang, Chen Gao, Guanghua Liu, Yong Li, Tao Jiang
- Abstract summary: We focus on developing a framework of privacy-preserving individual-level infection prediction based on federated learning (FL) and graph neural networks (GNN).
We propose Falcon, a Federated grAph Learning method for privacy-preserving individual-level infeCtion predictiON.
Our methodology outperforms state-of-the-art algorithms and is able to protect user privacy against actual privacy attacks.
- Score: 33.77030569632993
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately predicting the individual-level infection state is of great
value given its essential role in reducing the damage of an epidemic. However, there
exists an inescapable risk of privacy leakage in the fine-grained user mobility
trajectories required by individual-level infection prediction. In this paper,
we focus on developing a framework for privacy-preserving individual-level
infection prediction based on federated learning (FL) and graph neural networks
(GNN). We propose Falcon, a Federated grAph Learning method for
privacy-preserving individual-level infeCtion predictiON. It utilizes a novel
hypergraph structure with spatio-temporal hyperedges to describe the complex
interactions between individuals and locations in the contagion process. By
organically combining the FL framework with hypergraph neural networks, the
information propagation process of graph machine learning can be divided into
two stages, distributed on the server and the clients respectively, which
effectively protects user privacy while still transmitting high-level
information. Furthermore, Falcon features a carefully designed differential
privacy perturbation mechanism and a plausible pseudo-location generation
approach to preserve user privacy in the graph structure. In addition,
it introduces a cooperative coupling mechanism between the individual-level
prediction model and an additional region-level model to mitigate the
detrimental impacts caused by the injected obfuscation mechanisms. Extensive
experimental results show that our methodology outperforms state-of-the-art
algorithms and is able to protect user privacy against actual privacy attacks.
Our code and datasets are available at the link:
https://github.com/wjfu99/FL-epidemic.
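As a rough illustration of the two-stage split described above, the sketch below (PyTorch, with hypothetical names and shapes; the actual Falcon implementation in the linked repository may differ) lets each client clip and perturb its local embedding before the server pools along the hypergraph incidence structure:

```python
import torch
import torch.nn as nn

class ClientEncoder(nn.Module):
    """Client stage: embed one user's visit features, clip, and add DP noise."""
    def __init__(self, in_dim, hid_dim, sigma=1.0, clip=1.0):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)
        self.sigma, self.clip = sigma, clip

    def forward(self, x):
        # x: (in_dim,) one user's trajectory/visit features
        h = torch.relu(self.proj(x))                          # local embedding
        h = h * (self.clip / h.norm().clamp(min=self.clip))   # norm clipping
        return h + self.sigma * torch.randn_like(h)           # Gaussian perturbation

def server_propagate(client_embs, incidence):
    """Server stage: pool noisy uploads over spatio-temporal hyperedges.

    client_embs: (num_users, hid_dim) perturbed client embeddings
    incidence:   (num_users, num_edges) 0/1 hypergraph incidence matrix
    """
    deg = incidence.sum(dim=0).clamp(min=1.0)                 # hyperedge sizes
    edge_msg = (incidence.t() @ client_embs) / deg[:, None]   # per-edge mean
    return incidence @ edge_msg                               # broadcast back to users
```

The point of the split is that only perturbed, high-level embeddings ever leave a client; raw trajectories stay local.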
Related papers
- Privacy-Preserving Heterogeneous Federated Learning for Sensitive Healthcare Data [12.30620268528346]
We propose a new framework termed Abstention-Aware Federated Voting (AAFV).
AAFV can collaboratively and confidentially train heterogeneous local models while simultaneously protecting data privacy.
In particular, the proposed abstention-aware voting mechanism exploits a threshold-based abstention method to select high-confidence votes from heterogeneous local models.
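A minimal sketch of what a threshold-based abstention vote could look like (illustrative only; the actual AAFV protocol is more involved):

```python
import numpy as np

def abstention_aware_vote(probs, threshold=0.8):
    """probs: (num_models, num_classes) class probabilities per local model."""
    votes = [int(p.argmax()) for p in probs if p.max() >= threshold]
    if not votes:                             # every model abstained
        return None
    return int(np.bincount(votes).argmax())   # majority of high-confidence votes
```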
arXiv Detail & Related papers (2024-06-15T08:43:40Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
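Schematically, such bounds relate the privacy loss of noisy training (step size eta_t, noise scale sigma) to the expected squared gradient norm; the form below is an illustration in the spirit of the summary, not the paper's exact statement:

```latex
\mathrm{KL}\left(\theta_T \,\|\, \theta'_T\right)
  \;\lesssim\; \sum_{t=0}^{T-1} \frac{\eta_t}{2\sigma^2}\,
  \mathbb{E}\left[\lVert \nabla \ell(\theta_t) \rVert^2\right]
```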
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Local Differential Privacy in Graph Neural Networks: a Reconstruction Approach [17.000441871334683]
We propose a learning framework that can provide node privacy at the user level, while incurring low utility loss.
We focus on a decentralized notion of Differential Privacy, namely Local Differential Privacy.
We develop reconstruction methods to approximate features and labels from perturbed data.
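For intuition, a minimal perturb-then-debias sketch with randomized response on binary features (the paper's reconstruction methods are more sophisticated):

```python
import numpy as np

def randomized_response(bit, eps, rng=np.random.default_rng(0)):
    p = np.exp(eps) / (np.exp(eps) + 1)   # probability of reporting truthfully
    return bit if rng.random() < p else 1 - bit

def debias_mean(reports, eps):
    """Unbiased estimate of the true mean from randomized-response reports."""
    p = np.exp(eps) / (np.exp(eps) + 1)
    return (np.mean(reports) - (1 - p)) / (2 * p - 1)
```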
arXiv Detail & Related papers (2023-09-15T17:35:51Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
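One plausible form of an independence penalty is to push the latent covariance toward diagonal, e.g. (a guess at the flavor of the regularizer, not the paper's exact term):

```python
import torch

def independence_penalty(z):
    """Penalize correlation between latent dimensions of codes z: (n, d)."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.t() @ z) / (z.shape[0] - 1)          # empirical covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    return off_diag.pow(2).sum()                  # drive off-diagonals to zero
```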
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To alleviate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
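Purely as an illustration of communicating mixed rather than raw vectors (hypothetical; not PPGM's actual operator):

```python
import numpy as np

def obfuscate(u, v, rng=np.random.default_rng(0)):
    """Send a random mixture of two intermediate vectors instead of either one."""
    alpha = rng.uniform(0.3, 0.7)
    return alpha * u + (1 - alpha) * v   # contains information from both vectors
```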
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated
Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
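The core trick can be sketched as zero-sum correlated noise: each user's update is individually masked, yet the masks cancel in the over-the-air aggregate (a simplified illustration under idealized assumptions):

```python
import numpy as np

def correlated_noise(num_users, dim, scale=1.0, rng=np.random.default_rng(0)):
    n = rng.normal(0.0, scale, size=(num_users, dim))
    return n - n.mean(axis=0, keepdims=True)      # enforce zero sum across users

grads = np.random.default_rng(1).normal(size=(5, 3))      # toy per-user gradients
noisy = grads + correlated_noise(5, 3)
assert np.allclose(noisy.sum(axis=0), grads.sum(axis=0))  # aggregate preserved
```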
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Heterogeneous Graph Neural Network for Privacy-Preserving Recommendation [25.95411320126426]
With advances in deep learning, social networks are commonly modeled as heterogeneous graphs and learned with heterogeneous graph neural networks (HGNNs).
We propose a novel heterogeneous graph neural network privacy-preserving method based on a differential privacy mechanism named HeteDP.
arXiv Detail & Related papers (2022-10-02T14:41:02Z)
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine
Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees for training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
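A minimal gradient-pruning sketch in the spirit of the summary (keep only the largest-magnitude fraction of an update before sharing; BEAS's full pipeline adds blockchain coordination and anomaly detection):

```python
import numpy as np

def prune_gradient(grad, keep_ratio=0.1):
    """Zero out all but the top keep_ratio fraction of entries by magnitude."""
    flat = np.abs(grad).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]           # k-th largest magnitude
    return np.where(np.abs(grad) >= thresh, grad, 0.0)
```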
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
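The two ingredients named above combine into the standard DP-SGD update, sketched here as a schematic single step:

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip=1.0, sigma=1.0, lr=0.1,
                rng=np.random.default_rng(0)):
    """Clip each per-example gradient, average, then add calibrated Gaussian noise."""
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, sigma * clip / len(clipped), size=mean.shape)
    return params - lr * (mean + noise)
```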
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning
Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy (90.96%) is higher than that of the advanced deep learning models compared against.
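Schematically, the aggregation side of such a scheme is weighted parameter averaging (plain FedAvg shown below; the paper layers a secure aggregation mechanism on top):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors by local dataset size."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_params, client_sizes))

# e.g. three clients with toy parameter vectors
global_params = fedavg([np.ones(4), 2 * np.ones(4), 3 * np.ones(4)], [10, 20, 30])
```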
arXiv Detail & Related papers (2020-03-19T13:07:49Z)