Location Leakage in Federated Signal Maps
- URL: http://arxiv.org/abs/2112.03452v3
- Date: Sat, 6 Jan 2024 00:37:21 GMT
- Title: Location Leakage in Federated Signal Maps
- Authors: Evita Bakopoulou, Mengwei Yang, Jiang Zhang, Konstantinos Psounis,
Athina Markopoulou
- Abstract summary: We consider the problem of predicting cellular network performance (signal maps) from measurements collected by several mobile devices.
We formulate the problem within the online federated learning framework, where federated learning enables users to collaboratively train a model while keeping their training data on their devices.
We consider an honest-but-curious server, who observes the updates from target users participating in FL and infers their location using a deep leakage from gradients (DLG) type of attack.
We build on this observation to protect location privacy, in our setting, by revisiting and designing mechanisms within the federated learning framework including: tuning the FL parameters for averaging, curating local batches so as to mislead the DLG attacker, and aggregating across multiple users with different trajectories.
- Score: 7.093808731951124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of predicting cellular network performance (signal
maps) from measurements collected by several mobile devices. We formulate the
problem within the online federated learning framework: (i) federated learning
(FL) enables users to collaboratively train a model, while keeping their
training data on their devices; (ii) measurements are collected as users move
around over time and are used for local training in an online fashion. We
consider an honest-but-curious server, who observes the updates from target
users participating in FL and infers their location using a deep leakage from
gradients (DLG) type of attack, originally developed to reconstruct training
data of DNN image classifiers. We make the key observation that a DLG attack,
applied to our setting, infers the average location of a batch of local data,
and can thus be used to reconstruct the target users' trajectory at a coarse
granularity. We build on this observation to protect location privacy, in our
setting, by revisiting and designing mechanisms within the federated learning
framework including: tuning the FL parameters for averaging, curating local
batches so as to mislead the DLG attacker, and aggregating across multiple
users with different trajectories. We evaluate the performance of our
algorithms through both analysis and simulation based on real-world mobile
datasets, and we show that they achieve a good privacy-utility tradeoff.
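The key observation above, that a DLG-style attack on gradients from this kind of regression task recovers the average location of a local batch, can be illustrated with a minimal sketch. The linear signal-map model, the constant measurement offset (which makes all residuals equal so the analytic inversion recovers the batch mean exactly), and all variable names are simplifying assumptions for illustration, not the paper's actual architecture or attack procedure (which uses iterative gradient matching):

```python
import numpy as np

# Toy signal-map model: predicted signal = w @ [lat, lon] + b.
# An honest-but-curious server observes the batch-averaged MSE gradient
# that a target user uploads in one FL round.

rng = np.random.default_rng(0)
w = np.array([0.5, -0.3])
b = 1.0

# A user's local batch: 8 locations (lat, lon) and signal measurements.
# The constant +2.0 offset makes every residual identical.
locs = rng.uniform(0.0, 10.0, size=(8, 2))
meas = locs @ w + b + 2.0

# Batch-averaged gradients of the MSE loss, as seen by the server.
resid = (locs @ w + b) - meas                     # shape (8,)
g_w = 2.0 * (resid[:, None] * locs).mean(axis=0)  # d(loss)/dw
g_b = 2.0 * resid.mean()                          # d(loss)/db

# DLG-style inversion with a single dummy point: the bias gradient
# reveals the (shared) residual; dividing it out of the weight
# gradient reveals the residual-weighted average location.
r_hat = g_b / 2.0
loc_hat = (g_w / 2.0) / r_hat

print(loc_hat)               # reconstructed location
print(locs.mean(axis=0))     # true batch-average location
```

With unequal residuals the same algebra yields a residual-weighted average rather than the plain mean, which is why the reconstruction tracks the user's trajectory only at a coarse granularity.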
Related papers
- Federated Learning under Attack: Improving Gradient Inversion for Batch of Images [1.5749416770494706]
Federated Learning (FL) has emerged as a machine learning approach able to preserve the privacy of users' data.
Deep Leakage from Gradients with Feedback Blending (DLG-FB) is able to improve the inverting gradient attack.
arXiv Detail & Related papers (2024-09-26T12:02:36Z) - Unsupervised Federated Optimization at the Edge: D2D-Enabled Learning without Labels [14.696896223432507]
Federated learning (FL) is a popular solution for distributed machine learning (ML)
CF-CL employs local device cooperation where either explicit (i.e., raw data) or implicit (i.e., embeddings) information is exchanged through device-to-device (D2D) communications.
arXiv Detail & Related papers (2024-04-15T15:17:38Z) - Incremental Semi-supervised Federated Learning for Health Inference via
Mobile Sensing [5.434366992553875]
We propose FedMobile, an incremental semi-supervised federated learning algorithm.
We evaluate FedMobile using a real-world mobile sensing dataset for influenza-like symptom recognition.
arXiv Detail & Related papers (2023-12-19T23:39:33Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta)
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - eFedDNN: Ensemble based Federated Deep Neural Networks for Trajectory
Mode Inference [7.008213336755055]
GPS datasets may contain users' private information, preventing many users from sharing their data with a third party.
To address this challenge, we use federated learning (FL), a privacy-preserving machine learning technique.
We show that the proposed inference model can achieve accurate identification of users' mode of travel without compromising privacy.
arXiv Detail & Related papers (2022-05-11T19:58:48Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Federated Stochastic Gradient Descent Begets Self-Induced Momentum [151.4322255230084]
Federated learning (FL) is an emerging machine learning method that can be applied in mobile edge systems.
We show that running stochastic gradient descent (SGD) in such a setting can be viewed as adding a momentum-like term to the global aggregation process.
arXiv Detail & Related papers (2022-02-17T02:01:37Z) - Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z) - Mobility-Aware Cluster Federated Learning in Hierarchical Wireless
Networks [81.83990083088345]
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z) - FedLoc: Federated Learning Framework for Data-Driven Cooperative
Localization and Location Data Processing [12.518673970373422]
We consider data-driven, learning-model-based cooperative localization and location data processing.
We first review state-of-the-art algorithms in the context of federated learning.
We demonstrate various practical use cases that are summarized from a mixture of standard, newly published, and unpublished works.
arXiv Detail & Related papers (2020-03-08T01:51:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.