When Crowdsensing Meets Federated Learning: Privacy-Preserving Mobile Crowdsensing System
- URL: http://arxiv.org/abs/2102.10109v1
- Date: Sat, 20 Feb 2021 15:34:23 GMT
- Title: When Crowdsensing Meets Federated Learning: Privacy-Preserving Mobile Crowdsensing System
- Authors: Bowen Zhao, Ximeng Liu, Wei-neng Chen
- Abstract summary: Mobile crowdsensing (MCS) is an emerging sensing data collection pattern with scalability, low deployment cost, and distributed characteristics.
Traditional MCS systems suffer from privacy concerns and unfair reward distribution.
In this paper, we propose a privacy-preserving MCS system, called CrowdFL.
- Score: 12.087658145293522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile crowdsensing (MCS) is an emerging sensing data collection pattern with
scalability, low deployment cost, and distributed characteristics. Traditional
MCS systems suffer from privacy concerns and unfair reward distribution.
Moreover, existing privacy-preserving MCS solutions usually focus on the
privacy protection of data collection rather than that of data processing. To
tackle these problems, in this paper we integrate federated learning (FL) into
MCS and propose a privacy-preserving MCS system, called \textsc{CrowdFL}.
Specifically, to protect privacy, participants process sensing data locally
via federated learning and upload only encrypted training models. In
particular, a privacy-preserving federated averaging algorithm is proposed to
average the encrypted models. To reduce the computation and communication
overhead caused by dropped participants, discard and retransmission strategies
are designed. In addition, a privacy-preserving posted-pricing incentive
mechanism is designed to resolve the tension between privacy protection and
data valuation. Theoretical analysis and experimental evaluation on a
practical MCS application demonstrate that \textsc{CrowdFL} effectively
protects participants' privacy while remaining feasible and efficient.
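The core of the scheme is federated averaging over encrypted models. Below is a minimal sketch of that step, assuming an additively homomorphic Paillier scheme via the python-paillier (`phe`) library; CrowdFL's actual protocol (threshold decryption, the discard/retransmission strategies) is more involved.

```python
# Minimal sketch of privacy-preserving federated averaging with additively
# homomorphic encryption, using the python-paillier (`phe`) library. This
# only illustrates the general idea, not CrowdFL's full protocol.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def encrypt_model(weights):
    """A participant encrypts its local model update element-wise."""
    return [public_key.encrypt(w) for w in weights]

def average_encrypted(encrypted_models):
    """The server averages ciphertexts without seeing plaintext weights."""
    n = len(encrypted_models)
    summed = encrypted_models[0]
    for enc in encrypted_models[1:]:
        summed = [a + b for a, b in zip(summed, enc)]  # homomorphic addition
    return [c * (1.0 / n) for c in summed]  # plaintext-scalar multiplication

# Three participants with toy 3-parameter "models".
local_models = [[0.2, -1.0, 3.0], [0.4, -0.8, 2.6], [0.0, -1.2, 3.4]]
avg_enc = average_encrypted([encrypt_model(m) for m in local_models])

# In CrowdFL decryption would be distributed; here one key holder decrypts.
print([private_key.decrypt(c) for c in avg_enc])  # ~[0.2, -1.0, 3.0]
```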
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling the sensitive regions where differential privacy is applied.
Our method operates selectively on the data, allowing non-sensitive spatio-temporal regions to be defined without DP application, or combining differential privacy with other privacy techniques within data samples (a toy sketch follows this entry).
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
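As a toy illustration of the masked-DP idea, the sketch below adds calibrated Laplace noise only inside a sensitive mask; the function and parameter names are illustrative, not from the paper.

```python
# Hypothetical sketch of masked DP: add epsilon-DP Laplace noise only where
# a sensitivity mask is set, leaving non-sensitive regions untouched.
import numpy as np

def masked_laplace(data, mask, epsilon, sensitivity=1.0):
    """Apply Laplace noise (scale = sensitivity/epsilon) where mask is True."""
    noise = np.random.laplace(scale=sensitivity / epsilon, size=data.shape)
    return np.where(mask, data + noise, data)

frame = np.random.rand(4, 4)              # e.g. a tiny image patch
sensitive = np.zeros((4, 4), dtype=bool)
sensitive[1:3, 1:3] = True                # mark a region as sensitive
private_frame = masked_laplace(frame, sensitive, epsilon=0.5)
```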
- Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z)
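To make the membership-inference risk mentioned above concrete, here is a minimal loss-threshold attack baseline (in the spirit of Yeom et al.); it is a generic illustration, not taken from the chapter.

```python
# Loss-threshold membership inference: samples with unusually low loss are
# guessed to be training members. Illustrates the leakage being described.
import numpy as np

def membership_guess(losses, threshold):
    """Guess 'member' when the model's loss on a sample is below threshold."""
    return losses < threshold

train_losses = np.random.exponential(0.2, 1000)   # members: low loss
test_losses = np.random.exponential(1.0, 1000)    # non-members: higher loss
threshold = np.median(np.concatenate([train_losses, test_losses]))
tpr = membership_guess(train_losses, threshold).mean()
fpr = membership_guess(test_losses, threshold).mean()
print(f"attack TPR={tpr:.2f}, FPR={fpr:.2f}")     # a large gap => leakage
```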
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
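For context, the sketch below shows the pairwise-masking idea behind SecAgg-style secure aggregation (Bonawitz et al.): masks cancel in the sum, so the server learns only the aggregate, which is exactly the quantity whose leakage the paper studies. Dropout recovery is out of scope here.

```python
# Pairwise-mask secure aggregation sketch: client i adds +m_ij, client j
# adds -m_ij, so individual updates are hidden but masks cancel in the sum.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 3, 4
updates = rng.normal(size=(n_clients, dim))       # true local updates

masked = updates.copy()
for i in range(n_clients):
    for j in range(i + 1, n_clients):
        m = rng.normal(size=dim)                  # shared via key agreement
        masked[i] += m
        masked[j] -= m

# Individual masked updates look random, but the masks cancel in the sum.
assert np.allclose(masked.sum(axis=0), updates.sum(axis=0))
```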
- State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey [0.0]
This paper examines the evolving landscape of machine learning (ML) and its profound impact across various sectors.
It focuses on the emerging field of Privacy-preserving Machine Learning (PPML).
As ML applications become increasingly integral to industries like telecommunications, financial technology, and surveillance, they raise significant privacy concerns.
arXiv Detail & Related papers (2024-02-25T17:31:06Z)
- A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management [23.847568516724937]
We introduce a new privacy-preserving technique that uses a deep learning model trained with the Differentially-Private Stochastic Gradient Descent (DP-SGD) algorithm (sketched after this entry).
We then demonstrate a novel declarative privacy-preserving workflow that allows users to specify "what private information to protect" rather than "how to protect" it.
arXiv Detail & Related papers (2024-01-22T22:50:59Z)
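For reference, a minimal DP-SGD step (Abadi et al.): per-example gradient clipping followed by Gaussian noise. This sketches the underlying algorithm, not the paper's declarative workflow.

```python
# One DP-SGD step: clip each per-example gradient to a max L2 norm, average,
# add Gaussian noise calibrated to the clip norm, then take a gradient step.
import numpy as np

def dp_sgd_step(per_example_grads, params, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1):
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise on the mean: sigma * C / batch_size, matching noise on the sum.
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(clipped),
        size=mean_grad.shape)
    return params - lr * (mean_grad + noise)

params = np.zeros(5)
grads = [np.random.randn(5) for _ in range(32)]   # one microbatch
params = dp_sgd_step(grads, params)
```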
- Hide and Seek (HaS): A Lightweight Framework for Prompt Privacy Protection [6.201275002179716]
We introduce the HaS framework, where "H(ide)" and "S(eek)" represent its two core processes: hiding private entities for anonymization and seeking private entities for de-anonymization.
To quantitatively assess HaS's privacy protection performance, we propose both black-box and white-box adversarial models.
arXiv Detail & Related papers (2023-09-06T14:54:11Z)
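A toy sketch of the hide/seek flow: private entities are swapped for placeholders before a prompt leaves the device and restored in the reply. The dictionary-based entity detection here is an assumption; the framework itself uses learned hide/seek models.

```python
# Hide: replace private entities with placeholders (anonymization).
# Seek: restore the original entities in the returned text (de-anonymization).
def hide(text, entities):
    mapping = {}
    for i, ent in enumerate(entities):
        placeholder = f"<ENT_{i}>"
        mapping[placeholder] = ent
        text = text.replace(ent, placeholder)
    return text, mapping

def seek(text, mapping):
    for placeholder, ent in mapping.items():
        text = text.replace(placeholder, ent)
    return text

prompt = "Email Alice Smith about the Acme Corp contract."
hidden, mapping = hide(prompt, ["Alice Smith", "Acme Corp"])
# hidden == "Email <ENT_0> about the <ENT_1> contract."
reply = hidden.upper()            # stand-in for a remote LLM call
print(seek(reply, mapping))       # entities restored locally
```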
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy (DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
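Randomized response is the canonical discrete-valued local-DP mechanism of the kind analyzed in the paper; a self-contained sketch:

```python
# Randomized response: report the true bit with probability
# e^eps / (1 + e^eps), which gives epsilon-LDP for a single bit, then
# debias the aggregate to recover an unbiased estimate of the true mean.
import math, random

def randomized_response(bit, epsilon):
    p_true = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_true else 1 - bit

def debias(reports, epsilon):
    # E[report] = (2p-1)*mu + (1-p)  =>  mu = (mean - (1-p)) / (2p-1)
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

truth = [random.random() < 0.3 for _ in range(10000)]
reports = [randomized_response(b, epsilon=1.0) for b in truth]
print(debias(reports, 1.0))      # ~0.3
```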
Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Protecting Data from all Parties: Combining FHE and DP in Federated Learning [0.09176056742068812]
We propose a secure framework addressing an extended threat model with respect to privacy of the training data.
The proposed framework protects the privacy of the training data from all participants, namely the training data owners and an aggregating server.
By means of a novel quantization operator, we prove differential privacy guarantees in a context where the noise is quantified and bounded due to the use of homomorphic encryption.
arXiv Detail & Related papers (2022-05-09T14:33:44Z)
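A rough sketch of the FHE+DP combination: add noise for differential privacy, then quantize so values fit an integer homomorphic scheme. The quantizer and noise scale below are illustrative stand-ins for the paper's operator.

```python
# Clip an update, add Gaussian noise for DP, then round to a fixed-point
# integer grid so the result can be encrypted under an integer FHE scheme.
import numpy as np

def dp_quantize(update, noise_std=0.05, scale=2**10, clip=1.0):
    noisy = np.clip(update, -clip, clip) + np.random.normal(
        0.0, noise_std, update.shape)
    return np.round(noisy * scale).astype(np.int64)   # integers for FHE

def dequantize(q, scale=2**10):
    return q.astype(np.float64) / scale

update = np.random.uniform(-1, 1, size=8)
q = dp_quantize(update)           # what would be encrypted and aggregated
print(dequantize(q) - update)     # small noise + quantization error
```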
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient method (P2D3PG) is proposed to maximize the cache hit rates of devices in mobile edge computing (MEC) networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
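As a loose illustration of the model-free RL core (not the distributed P2D3PG algorithm itself), here is a single-cache REINFORCE sketch that learns which item to cache from hit/miss rewards:

```python
# Toy policy gradient for caching: a softmax policy over items, rewarded 1
# on a cache hit. The policy concentrates on the most popular item.
import numpy as np

rng = np.random.default_rng(1)
n_items, lr = 4, 0.1
theta = np.zeros(n_items)                      # policy logits
popularity = np.array([0.5, 0.2, 0.2, 0.1])    # hidden request distribution

for step in range(2000):
    probs = np.exp(theta) / np.exp(theta).sum()
    cached = rng.choice(n_items, p=probs)      # action: item to cache
    request = rng.choice(n_items, p=popularity)
    reward = 1.0 if request == cached else 0.0  # cache hit
    grad = -probs                               # d log pi(a)/d theta_k
    grad[cached] += 1.0                         #   = 1[k=a] - pi_k
    theta += lr * reward * grad

print(np.exp(theta) / np.exp(theta).sum())     # mass shifts to popular item
```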
This list is automatically generated from the titles and abstracts of the papers on this site.