Federated Crowdsensing: Framework and Challenges
- URL: http://arxiv.org/abs/2011.03208v1
- Date: Fri, 6 Nov 2020 06:49:11 GMT
- Title: Federated Crowdsensing: Framework and Challenges
- Authors: Leye Wang, Han Yu, Xiao Han
- Abstract summary: Crowdsensing is a promising sensing paradigm for smart city applications.
Privacy protection is one of the key issues in crowdsensing systems.
We propose a federated crowdsensing framework and analyze the privacy concerns of each crowdsensing stage.
- Score: 20.110862329289272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Crowdsensing is a promising sensing paradigm for smart city applications
(e.g., traffic and environment monitoring) with the prevalence of smart mobile
devices and advanced network infrastructure. Meanwhile, as tasks are performed
by individuals, privacy protection is one of the key issues in crowdsensing
systems. Traditionally, to alleviate users' privacy concerns, noise is added
to participants' sensitive data (e.g., their locations) through
techniques such as differential privacy. However, this inevitably results in
quality loss for the crowdsensing task. Recently, the federated learning paradigm
has been proposed, which aims to achieve privacy preservation in machine
learning while ensuring that the learning quality suffers little or no loss.
Inspired by the federated learning paradigm, this article studies how federated
learning may benefit crowdsensing applications. In particular, we first propose
a federated crowdsensing framework, analyze the privacy concerns of each
crowdsensing stage (i.e., task creation, task assignment, task execution, and
data aggregation), and discuss how federated learning techniques may take
effect. Finally, we summarize key challenges and opportunities in federated
crowdsensing.
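To make the contrast in the abstract concrete, here is a minimal Python sketch (our illustration, not code from the paper): the traditional approach perturbs each reported location with the Laplace mechanism, while a federated-style alternative keeps raw readings on the device and uploads only a model update. All names and parameters below are assumptions for illustration.
```python
# Illustrative sketch only, not the paper's implementation.
import random


def laplace_noise(scale: float) -> float:
    """Laplace(0, scale) sampled as the difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def perturb_location(lat: float, lon: float, epsilon: float,
                     sensitivity: float = 0.01):
    """Differential-privacy style reporting: smaller epsilon means noisier
    locations, hence quality loss for the crowdsensing task."""
    scale = sensitivity / epsilon
    return lat + laplace_noise(scale), lon + laplace_noise(scale)


def local_update(readings, global_model: float, lr: float = 0.1) -> float:
    """Federated-style reporting: fit the shared model on raw readings locally
    and upload only the model delta, never the readings themselves."""
    model = global_model
    for x, y in readings:                    # e.g. (location feature, sensed value)
        model -= lr * (model * x - y) * x    # one SGD step on a 1-D linear model
    return model - global_model


def aggregate(global_model: float, deltas) -> float:
    """Server side: FedAvg-style mean of the participants' model deltas."""
    return global_model + sum(deltas) / len(deltas)
```
In a real deployment the model would be a vector or a neural network and the aggregation would typically be weighted by local data size, but the flow is the same: data stays local, only updates travel.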
Related papers
- Secure Visual Data Processing via Federated Learning [2.4374097382908477]
This paper addresses the need for privacy-preserving solutions in large-scale visual data processing.
We propose a new approach that combines object detection, federated learning and anonymization.
Our solution is evaluated against traditional centralized models, showing that while there is a slight trade-off in accuracy, the privacy benefits are substantial.
arXiv Detail & Related papers (2025-02-09T09:44:18Z)
- TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning [16.898842295300067]
Federated learning is a computing paradigm that enhances privacy by enabling multiple parties to collaboratively train a machine learning model without revealing personal data.
Traditional federated learning platforms are unable to ensure privacy due to privacy leaks caused by the exchange of gradients.
This paper proposes TAPFed, an approach for achieving privacy-preserving federated learning in the context of multiple decentralized aggregators with malicious actors.
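The general idea of splitting trust across several aggregators can be sketched with simple additive secret sharing; this is only an illustration of the setting, not TAPFed's actual construction, and all names below are ours.
```python
# Illustration of the multi-aggregator setting only, not TAPFed's protocol:
# each client splits its fixed-point-encoded gradient into additive shares,
# so no single aggregator ever sees a plaintext update.
import random

MODULUS = 2 ** 31


def share_value(value: int, n_aggregators: int) -> list:
    """Split one value into n additive shares that sum to it mod MODULUS."""
    shares = [random.randrange(MODULUS) for _ in range(n_aggregators - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares


def share_gradient(gradient: list, n_aggregators: int) -> list:
    """Per-coordinate sharing; returns one share vector per aggregator."""
    per_coord = [share_value(g, n_aggregators) for g in gradient]
    return [list(col) for col in zip(*per_coord)]


def recombine(partial_sums: list) -> list:
    """Adding the aggregators' coordinate-wise partial sums (each already
    summed over all clients) recovers only the aggregate gradient."""
    return [sum(col) % MODULUS for col in zip(*partial_sums)]
```
Per the abstract, TAPFed additionally tolerates malicious aggregators, which this toy sketch does not address.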
arXiv Detail & Related papers (2025-01-09T08:24:10Z)
- Concurrent vertical and horizontal federated learning with fuzzy cognitive maps [1.104960878651584]
This research introduces a novel federated learning framework employing fuzzy cognitive maps.
It is designed to comprehensively address the challenges posed by diverse data distributions and non-identically distributed features.
The results demonstrate the effectiveness of the approach in achieving the desired learning outcomes while maintaining privacy and confidentiality standards.
arXiv Detail & Related papers (2024-12-17T12:11:14Z)
- Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
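A minimal sketch of the kind of on-device step the abstract describes; the specific clipping-plus-Gaussian-noise mechanism below is our assumption, not necessarily the paper's scheme.
```python
# Assumed mechanism for illustration (not necessarily the paper's scheme):
# clip the extracted feature vector and add Gaussian noise on the edge device
# before transmitting it to the central server for inference.
import numpy as np


def privatize_features(features: np.ndarray, clip_norm: float = 1.0,
                       noise_std: float = 0.1) -> np.ndarray:
    """Bound the features' L2 norm, then perturb them so the server never
    observes the raw extracted features."""
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + np.random.normal(0.0, noise_std, size=features.shape)


# Example: a device-side encoder produces a 128-dim feature vector; only the
# noisy version is sent over the wireless channel for server-side inference.
noisy = privatize_features(np.random.rand(128))
```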
arXiv Detail & Related papers (2024-10-25T18:11:02Z)
- Privacy in Federated Learning [0.0]
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
arXiv Detail & Related papers (2024-08-12T18:41:58Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper examines existing federated unlearning approaches, analyzing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Security and Privacy Issues of Federated Learning [0.0]
Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns.
This paper presents a comprehensive taxonomy of security and privacy challenges in Federated Learning (FL) across various machine learning models.
arXiv Detail & Related papers (2023-07-22T22:51:07Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z)
- Improving Federated Learning Face Recognition via Privacy-Agnostic Clusters [7.437386882362172]
This work proposes PrivacyFace, a framework to improve federated learning face recognition.
It consists of two components: First, a practical Differentially Private Local Clustering mechanism is proposed to distill sanitized clusters from local class centers.
Second, a consensus-aware recognition loss subsequently encourages global consensuses among clients, which in turn results in more discriminative features.
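As a rough illustration of the first component only (a toy stand-in, not PrivacyFace's actual Differentially Private Local Clustering), one can cluster local class-center embeddings and release only noise-perturbed centroids:
```python
# Toy stand-in for releasing sanitized clusters (not PrivacyFace's DPLC):
# run plain k-means on the local class-center embeddings and share only
# noise-perturbed centroids with other clients.
import numpy as np


def sanitized_clusters(class_centers: np.ndarray, k: int,
                       noise_std: float = 0.05, iters: int = 10,
                       seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Initialize centroids from k randomly chosen class centers (float array).
    centroids = class_centers[rng.choice(len(class_centers), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(class_centers[:, None, :] - centroids[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = class_centers[labels == j].mean(axis=0)
    # Perturb the centroids before they leave the client.
    return centroids + rng.normal(0.0, noise_std, size=centroids.shape)
```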
arXiv Detail & Related papers (2022-01-29T01:27:04Z)
- Federated Learning: A Signal Processing Perspective [144.63726413692876]
Federated learning is an emerging machine learning paradigm for training models across multiple edge devices holding local datasets, without explicitly exchanging the data.
This article provides a unified systematic framework for federated learning in a manner that encapsulates and highlights the main challenges that are natural to treat using signal processing tools.
arXiv Detail & Related papers (2021-03-31T15:14:39Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Federated Learning and Differential Privacy: Software tools analysis, the Sherpa.ai FL framework and methodological guidelines for preserving data privacy [8.30788601976591]
We present the Sherpa.ai Federated Learning framework, which is built upon a holistic view of federated learning and differential privacy.
We show how to follow the methodological guidelines with the Sherpa.ai Federated Learning framework by means of classification and regression use cases.
arXiv Detail & Related papers (2020-07-02T06:47:35Z)
- A Review of Privacy-preserving Federated Learning for the Internet-of-Things [3.3517146652431378]
This work reviews federated learning as an approach for performing machine learning on distributed data.
We aim to protect the privacy of user-generated data as well as to reduce the communication costs associated with data transfer.
We identify the strengths and weaknesses of different methods applied to federated learning.
arXiv Detail & Related papers (2020-04-24T15:27:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.