Federated Crowdsensing: Framework and Challenges
- URL: http://arxiv.org/abs/2011.03208v1
- Date: Fri, 6 Nov 2020 06:49:11 GMT
- Title: Federated Crowdsensing: Framework and Challenges
- Authors: Leye Wang, Han Yu, Xiao Han
- Abstract summary: Crowdsensing is a promising sensing paradigm for smart city applications.
Privacy protection is one of the key issues in crowdsensing systems.
We propose a federated crowdsensing framework, which analyzes the privacy concerns of each crowdsensing stage.
- Score: 20.110862329289272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowdsensing is a promising sensing paradigm for smart city applications
(e.g., traffic and environment monitoring) with the prevalence of smart mobile
devices and advanced network infrastructure. Meanwhile, as tasks are performed
by individuals, privacy protection is one of the key issues in crowdsensing
systems. Traditionally, to alleviate users' privacy concerns, noises are added
to participants' sensitive data (e.g., participants' locations) through
techniques such as differential privacy. However, this inevitably results in
quality loss to the crowdsensing task. Recently, federated learning paradigm
has been proposed, which aims to achieve privacy preservation in machine
learning while ensuring that the learning quality suffers little or no loss.
Inspired by the federated learning paradigm, this article studies how federated
learning may benefit crowdsensing applications. In particular, we first propose
a federated crowdsensing framework, which analyzes the privacy concerns of each
crowdsensing stage (i.e., task creation, task assignment, task execution, and
data aggregation) and discusses how federated learning techniques can be
applied at each stage. Finally, we summarize key challenges and opportunities
in federated
crowdsensing.
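The abstract mentions adding differential-privacy noise to sensitive data such as participants' locations. A minimal sketch of that idea using the Laplace mechanism follows; the function names, the default sensitivity value, and the use of coordinate degrees are illustrative assumptions, not the paper's method:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling from a zero-mean Laplace(scale) distribution.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def perturb_location(lat: float, lon: float, epsilon: float,
                     sensitivity: float = 0.01):
    # Standard Laplace-mechanism calibration: noise scale = sensitivity / epsilon.
    # A smaller epsilon gives stronger privacy but noisier (lower-quality) data,
    # which is exactly the quality/privacy trade-off the abstract describes.
    scale = sensitivity / epsilon
    return lat + laplace_noise(scale), lon + laplace_noise(scale)
```

Because the perturbed coordinates are what the crowdsensing server receives, tightening epsilon directly reduces task quality, motivating the federated alternative discussed in the paper.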
Related papers
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Security and Privacy Issues of Federated Learning [0.0]
Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns.
This paper presents a comprehensive taxonomy of security and privacy challenges in Federated Learning (FL) across various machine learning models.
arXiv Detail & Related papers (2023-07-22T22:51:07Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [76.47138162283714]
Forgetting refers to the loss or deterioration of previously acquired information or knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
The survey argues that forgetting is a double-edged sword and can be beneficial, even desirable, in certain cases.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters [7.437386882362172]
This work proposes PrivacyFace, a framework to improve federated learning face recognition.
It consists of two components: First, a practical Differentially Private Local Clustering mechanism is proposed to distill sanitized clusters from local class centers.
Second, a consensus-aware recognition loss encourages global consensus among clients, which in turn yields more discriminative features.
arXiv Detail & Related papers (2022-01-29T01:27:04Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning [6.1448102196124195]
Federated learning has proven to help protect against privacy violations and information leakage.
However, it introduces new risk vectors that make machine learning models harder to defend against adversarial samples.
We introduce DiPSeN, a Differentially Private Self-normalizing Neural Network which combines elements of differential privacy noise with self-normalizing techniques.
arXiv Detail & Related papers (2021-01-08T20:49:56Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Federated Learning and Differential Privacy: Software tools analysis, the Sherpa.ai FL framework and methodological guidelines for preserving data privacy [8.30788601976591]
We present the Sherpa.ai Federated Learning framework, which is built upon a holistic view of federated learning and differential privacy.
We show how to follow the methodological guidelines with the Sherpa.ai Federated Learning framework by means of classification and regression use cases.
arXiv Detail & Related papers (2020-07-02T06:47:35Z)
- A Review of Privacy-preserving Federated Learning for the Internet-of-Things [3.3517146652431378]
This work reviews federated learning as an approach for performing machine learning on distributed data.
We aim to protect the privacy of user-generated data and to reduce the communication costs associated with data transfer.
We identify the strengths and weaknesses of different methods applied to federated learning.
arXiv Detail & Related papers (2020-04-24T15:27:23Z)
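The federated learning setup recurring throughout these papers, local training on private data with only model parameters aggregated centrally, can be sketched minimally. The toy linear model, the FedAvg-style averaging, and all names below are illustrative assumptions, not any listed paper's implementation:

```python
from typing import List, Tuple

def local_update(weights: List[float], data: List[Tuple[float, float]],
                 lr: float = 0.1, epochs: int = 1) -> List[float]:
    # One client's SGD steps on its own (x, y) samples for a toy linear
    # model y = w0 * x + w1. The raw data never leaves this function.
    w = list(weights)
    for _ in range(epochs):
        for x, y in data:
            err = w[0] * x + w[1] - y
            w[0] -= lr * err * x
            w[1] -= lr * err
    return w

def fed_avg(global_w: List[float],
            client_datasets: List[List[Tuple[float, float]]],
            rounds: int = 5) -> List[float]:
    # Each round: broadcast weights, let every client train locally,
    # then average the returned weights; only parameters are exchanged.
    w = list(global_w)
    for _ in range(rounds):
        updates = [local_update(w, data) for data in client_datasets]
        w = [sum(u[i] for u in updates) / len(updates) for i in range(len(w))]
    return w
```

The privacy-preserving and DP-augmented variants surveyed above differ mainly in what happens to the client updates before averaging (clipping, added noise, secure aggregation), not in this basic loop.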
This list is automatically generated from the titles and abstracts of the papers on this site.