Federated Learning for Localization: A Privacy-Preserving Crowdsourcing
Method
- URL: http://arxiv.org/abs/2001.01911v2
- Date: Tue, 4 Feb 2020 10:17:12 GMT
- Title: Federated Learning for Localization: A Privacy-Preserving Crowdsourcing
Method
- Authors: Bekir Sait Ciftler, Abdullatif Albaseer, Noureddine Lasla, Mohamed
Abdallah
- Abstract summary: Received Signal Strength (RSS) fingerprint-based localization has attracted a lot of research effort and cultivated many commercial applications.
DL's ability to extract features and to classify autonomously makes it an attractive solution for fingerprint-based localization.
This paper presents a novel method utilizing federated learning to improve the accuracy of RSS fingerprint-based localization.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Received Signal Strength (RSS) fingerprint-based localization has attracted a
lot of research effort and cultivated many commercial applications of
location-based services due to its low cost and ease of implementation. Many
studies are exploring the use of deep learning (DL) algorithms for
localization. DL's ability to extract features and to classify autonomously
makes it an attractive solution for fingerprint-based localization. These
solutions require frequent retraining of DL models with vast amounts of
measurements. Although crowdsourcing is an excellent way to gather immense
amounts of data, it jeopardizes the privacy of participants, as it requires
collecting labeled data at a centralized server. Recently, federated learning has
emerged as a practical concept in solving the privacy preservation issue of
crowdsourcing participants by performing model training at the edge devices in
a decentralized manner; the participants no longer expose their data to a
centralized server. This paper presents a novel method utilizing federated
learning to improve the accuracy of RSS fingerprint-based localization while
preserving the privacy of the crowdsourcing participants. Employing federated
learning ensures that \emph{the privacy of user data is preserved} while
enabling adequate localization performance with experimental data captured
in real-world settings. The proposed method improved localization accuracy by
1.8 meters when used as a booster for centralized learning and achieved
satisfactory localization accuracy when used standalone.
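The training scheme the abstract describes, in which edge devices update a shared model on local RSS fingerprints and only model weights reach the server, can be sketched with federated averaging (FedAvg). This is a minimal hypothetical illustration assuming a linear model from RSS vectors to 2-D coordinates; the function names, hyperparameters, and synthetic data are assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local update: gradient descent on a linear model
    mapping RSS vectors (one entry per access point) to 2-D positions.
    Raw measurements X, y never leave the client."""
    W = weights.copy()
    for _ in range(epochs):
        pred = X @ W                       # (n, 2) predicted coordinates
        grad = X.T @ (pred - y) / len(X)   # mean-squared-error gradient
        W -= lr * grad
    return W

def fedavg(client_data, n_features, rounds=20):
    """Server loop: broadcast the global model, collect locally trained
    weights, and aggregate them by sample-weighted averaging."""
    W = np.zeros((n_features, 2))
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_data:
            updates.append(local_train(W, X, y))
            sizes.append(len(X))
        total = float(sum(sizes))
        W = sum(w * (s / total) for w, s in zip(updates, sizes))
    return W

# Synthetic example: 3 clients, 8 access points, positions linear in RSS.
true_W = rng.normal(size=(8, 2))
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 8))
    y = X @ true_W + 0.01 * rng.normal(size=(50, 2))
    clients.append((X, y))

W = fedavg(clients, n_features=8)
err = np.abs(W - true_W).mean()
```

With equally sized clients the aggregation reduces to a plain average of the local weights; in practice the paper's setting would replace the linear model with a deep network, but the client/server message pattern is the same.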
Related papers
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z) - Preserving Privacy in Federated Learning with Ensemble Cross-Domain
Knowledge Distillation [22.151404603413752]
Federated Learning (FL) is a machine learning paradigm where local nodes collaboratively train a central model.
Existing FL methods typically share model parameters or employ co-distillation to address the issue of unbalanced data distribution.
We develop a privacy-preserving and communication-efficient method in an FL framework with one-shot offline knowledge distillation.
arXiv Detail & Related papers (2022-09-10T05:20:31Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning
via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - Personalization Improves Privacy-Accuracy Tradeoffs in Federated
Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z) - DQRE-SCnet: A novel hybrid approach for selecting users in Federated
Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering [1.174402845822043]
Machine learning models based on sensitive real-world data promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more.
In many applications, participants benefit from collecting their own private datasets, training detailed machine learning models on the real data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated learning allows various parties to jointly train a machine learning algorithm on their combined data without each user disclosing their local data to a central server.
arXiv Detail & Related papers (2021-11-07T15:14:29Z) - Semi-supervised Federated Learning for Activity Recognition [9.720890017788676]
Training deep learning models on in-home IoT sensory data is commonly used to recognise human activities.
Recently, federated learning systems that use edge devices as clients to support local human activity recognition have emerged.
We propose an activity recognition system that uses semi-supervised federated learning.
arXiv Detail & Related papers (2020-11-02T09:47:14Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z) - FedOCR: Communication-Efficient Federated Learning for Scene Text
Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR fairly suitable to be deployed on end devices, we make two improvements including using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z) - Decentralised Learning from Independent Multi-Domain Labels for Person
Re-Identification [69.29602103582782]
Deep learning has been successful for many computer vision tasks due to the availability of shared and centralised large-scale training data.
However, increasing awareness of privacy concerns poses new challenges to deep learning, especially for person re-identification (Re-ID).
We propose a novel paradigm called Federated Person Re-Identification (FedReID) to construct a generalisable global model (a central server) by simultaneously learning with multiple privacy-preserved local models (local clients).
This client-server collaborative learning process is iteratively performed under privacy control, enabling FedReID to realise decentralised learning without sharing distributed data or collecting any
arXiv Detail & Related papers (2020-06-07T13:32:33Z) - Concentrated Differentially Private and Utility Preserving Federated
Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation on model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.