Towards Blockchain-Assisted Privacy-Aware Data Sharing For Edge
Intelligence: A Smart Healthcare Perspective
- URL: http://arxiv.org/abs/2306.16630v1
- Date: Thu, 29 Jun 2023 02:06:04 GMT
- Title: Towards Blockchain-Assisted Privacy-Aware Data Sharing For Edge
Intelligence: A Smart Healthcare Perspective
- Authors: Youyang Qu, Lichuan Ma, Wenjie Ye, Xuemeng Zhai, Shui Yu, Yunfeng Li,
and David Smith
- Abstract summary: A linkage attack is a dominant type of attack in the privacy domain.
Adversaries launch poisoning attacks to falsify health data, which can lead to misdiagnosis or even physical harm.
To protect private health data, we propose a personalized differential privacy model based on trust levels among users.
- Score: 19.208368632576153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The popularization of intelligent healthcare devices and big data analytics
significantly boosts the development of smart healthcare networks (SHNs). To
enhance the precision of diagnosis, different participants in SHNs share health
data that contains sensitive information. The data exchange process therefore
raises privacy concerns, especially when the integration of health data from
multiple sources (a linkage attack) results in further leakage. A linkage
attack is a dominant type of attack in the privacy domain that leverages
multiple data sources for private data mining. Furthermore, adversaries launch
poisoning attacks to falsify health data, which can lead to misdiagnosis or
even physical harm. To protect private health data, we propose a personalized
differential privacy model based on trust levels among users. Trust is
evaluated via a defined community density, and the corresponding privacy
protection level is mapped to controllable randomized noise constrained by
differential privacy. To avoid linkage attacks under personalized differential
privacy, we design a noise correlation decoupling mechanism using a Markov
stochastic process. In addition, we build the community model on a blockchain,
which mitigates the risk of poisoning attacks during differentially private
data transmission over SHNs. To verify the effectiveness and superiority of
the proposed approach, we conduct extensive experiments on benchmark datasets.
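To make the first ingredient concrete, here is a minimal sketch of a personalized Laplace mechanism, where a scalar trust score stands in for the paper's community-density evaluation and is mapped to a per-user privacy budget. All names, the linear mapping, and the budget range are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def trust_to_epsilon(trust: float, eps_min: float = 0.1, eps_max: float = 2.0) -> float:
    """Map a trust score in [0, 1] to a privacy budget epsilon.

    Higher trust tolerates a larger epsilon (weaker noise). The linear
    mapping is a placeholder for the paper's community-density-based
    trust evaluation.
    """
    trust = min(max(trust, 0.0), 1.0)
    return eps_min + trust * (eps_max - eps_min)

def personalized_laplace_release(value: float, trust: float,
                                 sensitivity: float = 1.0) -> float:
    """Release a health reading under epsilon-differential privacy.

    Standard Laplace mechanism: noise with scale b = sensitivity / epsilon
    satisfies epsilon-DP for a query with the given L1 sensitivity.
    """
    eps = trust_to_epsilon(trust)
    return value + float(np.random.laplace(0.0, sensitivity / eps))

# The same heart-rate reading shared with a low-trust and a high-trust peer.
reading = 72.0
print(personalized_laplace_release(reading, trust=0.2))  # heavily perturbed
print(personalized_laplace_release(reading, trust=0.9))  # lightly perturbed
```

The paper's Markov-process decoupling would additionally break correlations across successive noisy releases, which this sketch does not model.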
Related papers
- Privacy-Preserving Heterogeneous Federated Learning for Sensitive Healthcare Data [12.30620268528346]
We propose a new framework termed Abstention-Aware Federated Voting (AAFV).
AAFV can collaboratively and confidentially train heterogeneous local models while protecting data privacy.
In particular, the proposed abstention-aware voting mechanism exploits a threshold-based abstention method to select high-confidence votes from heterogeneous local models.
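A minimal sketch of the threshold-based abstention idea (illustrative names and threshold, not the paper's actual AAFV protocol): each local model votes only if its confidence clears the threshold, and the majority label among non-abstaining votes wins:

```python
from collections import Counter
from typing import Optional

def abstention_aware_vote(predictions: list[tuple[int, float]],
                          threshold: float = 0.8) -> Optional[int]:
    """Majority vote over high-confidence predictions only.

    `predictions` holds one (label, confidence) pair per local model.
    Models whose confidence falls below `threshold` abstain; if every
    model abstains, no label is returned.
    """
    votes = [label for label, conf in predictions if conf >= threshold]
    if not votes:
        return None  # all models abstained
    return Counter(votes).most_common(1)[0][0]

# Three heterogeneous local models predict on the same sample.
print(abstention_aware_vote([(1, 0.95), (1, 0.85), (0, 0.55)]))  # -> 1
print(abstention_aware_vote([(1, 0.60), (0, 0.55)]))             # -> None
```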
arXiv Detail & Related papers (2024-06-15T08:43:40Z)
- Group Decision-Making among Privacy-Aware Agents [2.4401219403555814]
Preserving individual privacy and enabling efficient social learning are both important desiderata but seem fundamentally at odds with each other.
We do so by controlling information leakage using rigorous statistical guarantees based on differential privacy (DP).
Our results flesh out the nature of the trade-offs in both cases between the quality of the group decision outcomes, learning accuracy, communication cost, and the level of privacy protections that the agents are afforded.
arXiv Detail & Related papers (2024-02-13T01:38:01Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns comes with stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Blockchain-empowered Federated Learning for Healthcare Metaverses: User-centric Incentive Mechanism with Optimal Data Freshness [66.3982155172418]
We first design a user-centric privacy-preserving framework based on decentralized Federated Learning (FL) for healthcare metaverses.
We then utilize Age of Information (AoI) as an effective data-freshness metric and propose an AoI-based contract theory model under Prospect Theory (PT) to motivate sensing data sharing.
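For reference, the standard Age of Information at time $t$ is $\Delta(t) = t - u(t)$, where $u(t)$ is the generation timestamp of the freshest update received so far; smaller $\Delta(t)$ means fresher sensing data. The paper's contract-theoretic incentive design builds on this basic metric.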
arXiv Detail & Related papers (2023-07-29T12:54:03Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
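As background, the standard $f$-DP definition (from the $f$-DP literature, not specific to this paper's derivations): a mechanism $M$ is $f$-DP if $T(M(S), M(S')) \geq f$ for every pair of neighboring datasets $S, S'$, where the trade-off function $T(P, Q)$ maps each type-I error level $\alpha$ to the smallest type-II error achievable when testing $P$ against $Q$. Tight guarantees for a discrete-valued mechanism amount to characterizing this trade-off function exactly.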
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z)
- Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy [1.911678487931003]
We propose a general execution model to control the data-dependence of communications in user-side decentralized computations.
Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling.
arXiv Detail & Related papers (2021-12-23T08:30:17Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
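The two ingredients named above are the core of DP-SGD; a minimal sketch (NumPy, with illustrative constants) of per-example gradient clipping followed by Gaussian noise addition:

```python
import numpy as np

def dp_sgd_aggregate(per_example_grads: np.ndarray,
                     clip_norm: float = 1.0,
                     noise_multiplier: float = 1.1) -> np.ndarray:
    """One private gradient aggregation step: clip, sum, add noise.

    `per_example_grads` has shape (batch_size, num_params). Each row is
    rescaled so its L2 norm is at most `clip_norm`, bounding any single
    example's influence; Gaussian noise with standard deviation
    `noise_multiplier * clip_norm` is then added to the clipped sum.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noise = np.random.normal(0.0, noise_multiplier * clip_norm,
                             size=per_example_grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(per_example_grads)
```

Both the clipping bound and the injected noise distort gradients, which is one plausible source of the robustness degradation the paper reports.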
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Deep Directed Information-Based Learning for Privacy-Preserving Smart Meter Data Release [30.409342804445306]
We study the problem in the context of time series data and smart meters (SMs) power consumption measurements.
We introduce the Directed Information (DI) as a more meaningful measure of privacy in the considered setting.
Our empirical studies on real-world SM measurement datasets show the existing trade-offs between privacy and utility in the worst-case scenario.
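For context, Massey's Directed Information from a sequence $X^n$ to $Y^n$ is $I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1})$; unlike plain mutual information, it respects the causal (temporal) direction of the data, which is why it is a natural privacy measure for sequentially released SM readings.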
arXiv Detail & Related papers (2020-11-20T13:41:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.