FeMLoc: Federated Meta-learning for Adaptive Wireless Indoor Localization Tasks in IoT Networks
- URL: http://arxiv.org/abs/2405.11079v1
- Date: Fri, 17 May 2024 20:22:39 GMT
- Title: FeMLoc: Federated Meta-learning for Adaptive Wireless Indoor Localization Tasks in IoT Networks
- Authors: Yaya Etiabi, Wafa Njima, El Mehdi Amhoud
- Abstract summary: FeMLoc is a federated meta-learning framework for localization.
FeMLoc operates in two stages: (i) collaborative meta-training, where a global meta-model is created by training on diverse localization datasets from edge devices, and (ii) rapid adaptation, where the pre-trained meta-model initializes the localization model for a new environment and is fine-tuned with a small amount of new data.
For a target accuracy of around 5m, FeMLoc achieves the same level of accuracy up to 82.21% faster than the baseline NN approach.
- Score: 1.0650780147044159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid growth of the Internet of Things fosters collaboration among connected devices for tasks like indoor localization. However, existing indoor localization solutions struggle with dynamic and harsh conditions, requiring extensive data collection and environment-specific calibration. These factors impede cooperation, scalability, and the utilization of prior research efforts. To address these challenges, we propose FeMLoc, a federated meta-learning framework for localization. FeMLoc operates in two stages: (i) collaborative meta-training where a global meta-model is created by training on diverse localization datasets from edge devices. (ii) Rapid adaptation for new environments, where the pre-trained global meta-model initializes the localization model, requiring only minimal fine-tuning with a small amount of new data. In this paper, we provide a detailed technical overview of FeMLoc, highlighting its unique approach to privacy-preserving meta-learning in the context of indoor localization. Our performance evaluation demonstrates the superiority of FeMLoc over state-of-the-art methods, enabling swift adaptation to new indoor environments with reduced calibration effort. Specifically, FeMLoc achieves up to 80.95% improvement in localization accuracy compared to the conventional baseline neural network (NN) approach after only 100 gradient steps. Alternatively, for a target accuracy of around 5m, FeMLoc achieves the same level of accuracy up to 82.21% faster than the baseline NN approach. This translates to FeMLoc requiring fewer training iterations, thereby significantly reducing fingerprint data collection and calibration efforts. Moreover, FeMLoc exhibits enhanced scalability, making it well-suited for location-aware massive connectivity driven by emerging wireless communication technologies.
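The two-stage workflow described above can be sketched with a toy regression stand-in for the neural localization model. Everything below (the environment generator, step counts, learning rates, and the first-order Reptile/FedAvg-style meta-update) is illustrative, not FeMLoc's actual architecture or meta-training rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_steps(w, X, y, steps=10, lr=0.05):
    """A few gradient steps of least-squares regression (toy stand-in for the
    neural localization model)."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def make_env(shift, n=64):
    """Toy 'indoor environment': RSSI-like features mapped to a position,
    shifted per building so environments differ."""
    X = rng.normal(size=(n, 3))
    w_true = np.array([1.0, -2.0, 0.5]) + shift
    y = X @ w_true + 0.01 * rng.normal(size=n)
    return X, y

# Stage (i): collaborative meta-training. Each edge device adapts the current
# meta-model on its own dataset; the server averages the adapted weights
# (a first-order, Reptile/FedAvg-style sketch -- FeMLoc's exact meta-update
# may differ).
envs = [make_env(s) for s in (-0.3, 0.0, 0.3)]
meta_w = np.zeros(3)
for _ in range(50):
    adapted = [sgd_steps(meta_w, X, y) for X, y in envs]
    meta_w = np.mean(adapted, axis=0)

# Stage (ii): rapid adaptation. In a new environment, the meta-model
# initialization needs far fewer fine-tuning steps than training from scratch.
X_new, y_new = make_env(0.15)
w_scratch = sgd_steps(np.zeros(3), X_new, y_new, steps=5)
w_meta = sgd_steps(meta_w.copy(), X_new, y_new, steps=5)

def mse(w):
    return float(np.mean((X_new @ w - y_new) ** 2))

print(mse(w_meta), mse(w_scratch))  # meta-initialized error is far lower
```

The gap between the two errors after the same number of gradient steps is the mechanism behind the paper's "same accuracy, up to 82.21% faster" claim: fewer adaptation steps translate directly into less fingerprint collection and calibration in the new environment.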
Related papers
- NTK-DFL: Enhancing Decentralized Federated Learning in Heterogeneous Settings via Neural Tangent Kernel [27.92271597111756]
Decentralized federated learning (DFL) is a collaborative machine learning framework for training a model across participants without a central server or raw data exchange.
Recent work has shown that the neural tangent kernel (NTK) approach, when applied to federated learning in a centralized framework, can lead to improved performance.
We propose an approach leveraging the NTK to train client models in the decentralized setting, while introducing a synergy between NTK-based evolution and model averaging.
arXiv Detail & Related papers (2024-10-02T18:19:28Z)
- Ferret: Federated Full-Parameter Tuning at Scale for Large Language Models [54.02863371927658]
Large Language Models (LLMs) have become indispensable in numerous real-world applications.
Ferret is the first first-order method with shared randomness for federated full-parameter tuning of LLMs.
It achieves high computational efficiency, reduced communication overhead, and fast convergence.
arXiv Detail & Related papers (2024-09-10T07:28:13Z)
- Meta-FL: A Novel Meta-Learning Framework for Optimizing Heterogeneous Model Aggregation in Federated Learning [2.28438857884398]
Federated Learning (FL) enables collaborative model training across diverse entities.
FL faces challenges such as data heterogeneity and model diversity.
The Meta-Federated Learning framework has been introduced to tackle these challenges.
arXiv Detail & Related papers (2024-06-23T06:57:07Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
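The layer-wise trick rests on the fact that backpropagation produces gradients from the last layer backward, so a straggler cut off mid-computation still holds valid gradients for its deepest layers. A minimal sketch of that aggregation rule (shapes, client depths, and gradient values are all made up for illustration):

```python
import numpy as np

num_layers = 4
# Number of layers each client managed to backpropagate through this round:
client_depths = [4, 2, 1]  # client 0 finished; clients 1 and 2 straggled
client_grads = [[np.full(3, float(c + 1)) for _ in range(num_layers)]
                for c in range(len(client_depths))]

layer_updates = []
for layer in range(num_layers):
    # Layer `layer` is reached only after computing num_layers - layer
    # gradients going backward, so only sufficiently deep clients contribute.
    contributors = [client_grads[c][layer]
                    for c, depth in enumerate(client_depths)
                    if depth >= num_layers - layer]
    layer_updates.append(np.mean(contributors, axis=0))

print([float(u[0]) for u in layer_updates])  # [1.0, 1.0, 1.5, 2.0]
```

Deeper (later) layers average gradients from more clients, while early layers fall back to the clients that completed a full backward pass, so no round ever waits on the slowest device.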
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Federated Meta-Learning for Few-Shot Fault Diagnosis with Representation Encoding [21.76802204235636]
We propose representation encoding-based federated meta-learning (REFML) for few-shot fault diagnosis.
REFML harnesses the inherent heterogeneity among training clients, effectively transforming it into an advantage for out-of-distribution generalization.
It achieves an increase in accuracy by 2.17%-6.50% when tested on unseen working conditions of the same equipment type and 13.44%-18.33% when tested on totally unseen equipment types.
arXiv Detail & Related papers (2023-10-13T10:48:28Z)
- A Meta-learning based Generalizable Indoor Localization Model using Channel State Information [8.302375673936387]
We propose a new meta-learning algorithm, TB-MAML, intended to further improve generalizability when the dataset is limited.
We evaluate the performance of TB-MAML-based localization against conventionally trained localization models and localization done using other meta-learning algorithms.
arXiv Detail & Related papers (2023-05-22T19:54:59Z)
- Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
arXiv Detail & Related papers (2022-06-20T15:25:08Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes carried out during CE-FL and analyze its training behavior.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- FedDC: Federated Learning with Non-IID Data via Local Drift Decoupling and Correction [48.85303253333453]
Federated learning (FL) allows multiple clients to collectively train a high-performance global model without sharing their private data.
We propose a novel federated learning algorithm with local drift decoupling and correction (FedDC).
Our FedDC only introduces lightweight modifications in the local training phase, in which each client utilizes an auxiliary local drift variable to track the gap between the local model parameter and the global model parameters.
Experimental results and analysis demonstrate that FedDC accelerates convergence and achieves better performance on various image classification tasks.
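The drift-variable mechanism can be sketched on a toy non-IID setup, where each client minimizes a quadratic with a different optimum. The loss shape, penalty constant, and step counts below are illustrative, not the paper's exact objective:

```python
import numpy as np

# Two non-IID clients, each minimizing a quadratic with a different optimum.
optima = [np.array([1.0]), np.array([3.0])]

def local_train(w_global, h, opt, lr=0.1, steps=20, lam=0.1):
    """FedDC-style local phase (sketch): the auxiliary variable `h` tracks the
    gap between local and global parameters, and a penalty keeps w + h close
    to the global model."""
    w = w_global.copy()
    for _ in range(steps):
        grad = w - opt                    # gradient of the client's own loss
        grad += lam * (w + h - w_global)  # drift-decoupling correction term
        w -= lr * grad
    h = h + (w - w_global)                # track accumulated local drift
    return w, h

w_global = np.zeros(1)
drifts = [np.zeros(1) for _ in optima]
for _ in range(30):
    ws, hs = [], []
    for h, opt in zip(drifts, optima):
        w, h = local_train(w_global, h, opt)
        ws.append(w)
        hs.append(h)
    drifts = hs
    # The server aggregates drift-corrected models (w_i + h_i):
    w_global = np.mean([w + h for w, h in zip(ws, hs)], axis=0)

print(float(w_global[0]))  # converges near 2.0, the average of the optima
```

Because each client's drift variable absorbs the systematic gap its heterogeneous data induces, the aggregated model lands on the true consensus point instead of being dragged around by the clients that drift furthest.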
arXiv Detail & Related papers (2022-03-22T14:06:26Z)
- Fed-LAMB: Layerwise and Dimensionwise Locally Adaptive Optimization Algorithm [24.42828071396353]
In the emerging paradigm of federated learning (FL), a large number of clients, such as mobile devices, train on their respective data.
Due to the low bandwidth, decentralized optimization methods need to shift the computation burden from the clients to the server.
We present Fed-LAMB, a novel federated learning method based on layerwise and dimensionwise locally adaptive optimization of deep neural networks.
arXiv Detail & Related papers (2021-10-01T16:54:31Z)
- FedMix: Approximation of Mixup under Mean Augmented Federated Learning [60.503258658382]
Federated learning (FL) allows edge devices to collectively learn a model without directly sharing data within each device.
Current state-of-the-art algorithms suffer from performance degradation as the heterogeneity of local data across clients increases.
We propose a new augmentation algorithm, named FedMix, which is inspired by a phenomenal yet simple data augmentation method, Mixup.
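The core idea can be sketched in a few lines: plain Mixup blends two labeled examples, while FedMix's mean-augmented variant blends local examples with another client's shared batch statistics, since raw data cannot leave a device. Shapes and the mixing weight below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Vanilla Mixup: a Beta-distributed convex combination of two examples."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# FedMix's twist (sketched): clients share only the MEAN of a local batch.
# Each client then mixes its own examples with the received means,
# approximating cross-client Mixup without exchanging raw data.
local_X = rng.normal(size=(8, 4))
local_y = np.eye(2)[rng.integers(0, 2, size=8)]  # one-hot labels
shared_mean_x = rng.normal(size=4)               # another client's batch mean
shared_mean_y = np.array([0.5, 0.5])             # and its mean label

lam = 0.9  # keep mostly local data, blend in a little of the shared mean
mixed_X = lam * local_X + (1 - lam) * shared_mean_x
mixed_y = lam * local_y + (1 - lam) * shared_mean_y

print(mixed_X.shape, mixed_y.sum(axis=1))  # labels remain valid distributions
```

Sharing only batch means keeps communication and privacy costs low while still injecting cross-client signal, which is why the approximation helps under heterogeneous local data.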
arXiv Detail & Related papers (2021-07-01T06:14:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.