eFedDNN: Ensemble based Federated Deep Neural Networks for Trajectory
Mode Inference
- URL: http://arxiv.org/abs/2205.05756v1
- Date: Wed, 11 May 2022 19:58:48 GMT
- Title: eFedDNN: Ensemble based Federated Deep Neural Networks for Trajectory
Mode Inference
- Authors: Daniel Opoku Mensah and Godwin Badu-Marfo and Ranwa Al Mallah and
Bilal Farooq
- Abstract summary: GPS datasets may contain users' private information (e.g., home location), making many users reluctant to share their data with a third party.
To address this challenge, we use federated learning (FL), a privacy-preserving machine learning technique.
We show that the proposed inference model can achieve accurate identification of users' mode of travel without compromising privacy.
- Score: 7.008213336755055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As the most significant data source in smart mobility systems, GPS
trajectories can help identify user travel mode. However, these GPS datasets
may contain users' private information (e.g., home location), preventing many
users from sharing their private information with a third party. Hence,
identifying travel modes while protecting users' privacy is a significant
issue. To address this challenge, we use federated learning (FL), a
privacy-preserving machine learning technique that aims at collaboratively
training a robust global model by accessing users' locally trained models but
not their raw data. Specifically, we designed a novel ensemble-based Federated
Deep Neural Network (eFedDNN). The ensemble method combines the outputs of the
different models learned via FL by the users and shows an accuracy that
surpasses comparable models reported in the literature. Extensive experimental
studies on a real-world open-access dataset from Montreal demonstrate that the
proposed inference model can achieve accurate identification of users' mode of
travel without compromising privacy.
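The abstract describes two components: clients train models locally and share only parameters with the server, and an ensemble then combines the outputs of the different client-trained models. A minimal sketch of these two ideas, using hypothetical helper functions (FedAvg-style weighted parameter averaging and soft-voting over class probabilities), not the authors' actual implementation:

```python
# Sketch of (1) FedAvg-style aggregation of locally trained model weights and
# (2) an ensemble that combines per-model class probabilities by soft voting
# to pick a travel mode. Helper names and shapes are illustrative assumptions.

def federated_average(client_weights, client_sizes):
    """Weighted average of client parameter vectors, weighted by data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

def ensemble_predict(prob_outputs):
    """Average class-probability vectors from several models; return argmax."""
    n_models = len(prob_outputs)
    n_classes = len(prob_outputs[0])
    avg = [sum(p[c] for p in prob_outputs) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c])

# Toy usage: two clients with 2-parameter models; three models vote on 3 modes.
global_w = federated_average([[1.0, 2.0], [3.0, 4.0]], client_sizes=[1, 3])
mode = ensemble_predict([[0.2, 0.5, 0.3], [0.1, 0.7, 0.2], [0.3, 0.4, 0.3]])
```

Only the parameter vectors and class probabilities cross the network here; raw GPS trajectories stay on the client, which is the privacy property the paper relies on.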
Related papers
- Partial Federated Learning [26.357723187375665]
Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices.
We propose a new algorithm called Partial Federated Learning (PartialFL), where a machine learning model is trained using data where a subset of data modalities can be made available to the server.
arXiv Detail & Related papers (2024-03-03T21:04:36Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a non-negligible challenge.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- GOF-TTE: Generative Online Federated Learning Framework for Travel Time Estimation [8.05623264361826]
We introduce GOF-TTE, a Generative Online Federated Learning Framework for Travel Time Estimation for mobile user groups.
Private data is kept on client devices during training, and the global model is designed as an online generative model shared by all clients to infer the real-time road traffic state.
We also apply a simple privacy attack to our framework and implement a differential privacy mechanism to further guarantee privacy safety.
arXiv Detail & Related papers (2022-07-02T14:10:26Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Location Leakage in Federated Signal Maps [7.093808731951124]
We consider the problem of predicting cellular network performance (signal maps) from measurements collected by several mobile devices.
We formulate the problem within the online federated learning framework, where federated learning enables users to collaboratively train a model while keeping their training data on their devices.
We consider an honest-but-curious server, who observes the updates from target users participating in FL and infers their location using a deep leakage from gradients (DLG) type of attack.
We build on this observation to protect location privacy, in our setting, by revisiting and designing mechanisms within the federated learning framework including: tuning the FL
arXiv Detail & Related papers (2021-12-07T02:28:12Z)
- Personalized Federated Learning over non-IID Data for Indoor Localization [17.03722514121803]
In this paper, we consider the use of recent FL schemes to train a set of personalized models that are then optimally fused through Bayesian rules.
arXiv Detail & Related papers (2021-07-09T03:31:16Z)
- SCEI: A Smart-Contract Driven Edge Intelligence Framework for IoT Systems [15.796325306292134]
Federated learning (FL) enables collaborative training of a shared model on edge devices while maintaining data privacy.
Various personalized approaches have been proposed, but such approaches fail to handle underlying shifts in data distribution.
This paper presents a dynamically optimized personal deep learning scheme based on blockchain and federated learning.
arXiv Detail & Related papers (2021-03-12T02:57:05Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scales with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.