SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization
- URL: http://arxiv.org/abs/2411.09055v1
- Date: Wed, 13 Nov 2024 22:28:05 GMT
- Title: SAFELOC: Overcoming Data Poisoning Attacks in Heterogeneous Federated Machine Learning for Indoor Localization
- Authors: Akhil Singampalli, Danish Gufran, Sudeep Pasricha
- Abstract summary: Machine learning (ML) based indoor localization solutions are critical for many emerging applications.
Their efficacy is often compromised by hardware/software variations across mobile devices and the threat of ML data poisoning attacks.
We introduce SAFELOC, a novel framework that not only minimizes localization errors under these challenging conditions but also ensures model compactness for efficient mobile device deployment.
- Abstract: Machine learning (ML) based indoor localization solutions are critical for many emerging applications, yet their efficacy is often compromised by hardware/software variations across mobile devices (i.e., device heterogeneity) and the threat of ML data poisoning attacks. Conventional methods aimed at countering these challenges show limited resilience to the uncertainties created by these phenomena. In response, in this paper, we introduce SAFELOC, a novel framework that not only minimizes localization errors under these challenging conditions but also ensures model compactness for efficient mobile device deployment. Our framework targets a distributed and co-operative learning environment that uses federated learning (FL) to preserve user data privacy and assumes heterogeneous mobile devices carried by users (just like in most real-world scenarios). Within this heterogeneous FL context, SAFELOC introduces a novel fused neural network architecture that performs data poisoning detection and localization, with a low model footprint. Additionally, a dynamic saliency map-based aggregation strategy is designed to adapt based on the severity of the detected data poisoning scenario. Experimental evaluations demonstrate that SAFELOC achieves improvements of up to 5.9x in mean localization error, 7.8x in worst-case localization error, and a 2.1x reduction in model inference latency compared to state-of-the-art indoor localization frameworks, across diverse building floorplans, mobile devices, and ML data poisoning attack scenarios.
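The abstract describes a dynamic, severity-aware aggregation strategy but does not spell out its mechanics. Below is a minimal, hypothetical sketch of severity-weighted aggregation in that spirit: per-client poisoning scores (assumed to come from a detector such as SAFELOC's fused network) scale each client's contribution, and clients above a severity threshold are excluded. All names and parameters here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def aggregate_updates(client_weights, poison_scores, severity_threshold=0.5):
    """Severity-aware weighted aggregation (illustrative sketch).

    client_weights: list of 1-D parameter vectors, one per client
    poison_scores: per-client poisoning likelihood in [0, 1],
                   assumed to come from an upstream detector
    """
    scores = np.asarray(poison_scores, dtype=float)
    # Trust falls off with detected poisoning severity; clients above the
    # threshold are excluded outright, milder cases are merely down-weighted.
    trust = np.where(scores > severity_threshold, 0.0, 1.0 - scores)
    if trust.sum() == 0.0:  # all clients flagged: fall back to uniform weights
        trust = np.ones_like(trust)
    trust /= trust.sum()
    stacked = np.stack(client_weights)
    return (trust[:, None] * stacked).sum(axis=0)
```

A flagged client (score above the threshold) contributes nothing; the remaining clients are averaged with weights proportional to their trust.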
Related papers
- SENTINEL: Securing Indoor Localization against Adversarial Attacks with Capsule Neural Networks [2.7186493234782527]
We present SENTINEL, a novel embedded machine learning framework to bolster the resilience of indoor localization solutions against adversarial attacks.
We also introduce RSSRogueLoc, a dataset capturing the effects of rogue APs from several real-world indoor environments.
arXiv Detail & Related papers (2024-07-14T21:40:12Z) - Recovering Labels from Local Updates in Federated Learning [14.866327821524854]
Gradient inversion (GI) attacks present a threat to the privacy of clients in federated learning (FL).
We present a novel label recovery scheme, Recovering Labels from Local Updates (RLU)
RLU achieves high performance even in realistic settings where an FL system runs multiple local epochs and trains on heterogeneous data.
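RLU itself is not detailed in this summary, but the simplest form of label recovery from local updates is a well-known single-sample heuristic: for softmax cross-entropy, the last-layer bias gradient equals softmax(logits) minus the one-hot label, so its only negative entry marks the ground-truth class. A sketch of that baseline (not the RLU scheme) follows.

```python
import numpy as np

def recover_label_from_bias_grad(bias_grad):
    """Single-sample label recovery heuristic (illustrative, not RLU).

    With softmax cross-entropy, grad_bias = softmax(logits) - one_hot(label)
    for one training example, so the unique negative entry identifies the
    example's true class.
    """
    return int(np.argmin(bias_grad))
```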
arXiv Detail & Related papers (2024-05-02T02:33:15Z) - FedMID: A Data-Free Method for Using Intermediate Outputs as a Defense Mechanism Against Poisoning Attacks in Federated Learning [17.796469530291954]
Federated learning combines local updates from clients to produce a global model, which is susceptible to poisoning attacks.
We present a new paradigm to defend against poisoning attacks in federated learning using functional mappings of local models based on intermediate outputs.
arXiv Detail & Related papers (2024-04-18T05:10:05Z) - FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
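The core idea, transforming updates into the frequency domain and filtering before averaging, can be sketched loosely as follows. This is a simplified stand-in, not FreqFed's actual pipeline: it fingerprints each client update by its low-frequency FFT magnitudes and drops clients whose fingerprint lies far from the median, whereas the paper's mechanism and thresholds may differ.

```python
import numpy as np

def freq_filter_aggregate(updates, k=4):
    """Frequency-domain outlier filtering (loose sketch of the idea).

    Each flattened client update is transformed to the frequency domain;
    clients whose low-frequency fingerprint sits far from the median
    fingerprint are dropped before plain averaging.
    """
    U = np.stack(updates)                               # (clients, params)
    spectra = np.abs(np.fft.rfft(U, axis=1))[:, :k]     # low-frequency energy
    median = np.median(spectra, axis=0)
    dist = np.linalg.norm(spectra - median, axis=1)
    keep = dist <= 2.0 * np.median(dist) + 1e-12        # crude robust cutoff
    return U[keep].mean(axis=0)
```

A poisoned update with an anomalous spectral signature is excluded, so it never perturbs the aggregated model.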
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - Data-Agnostic Model Poisoning against Federated Learning: A Graph
Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z) - CALLOC: Curriculum Adversarial Learning for Secure and Robust Indoor
Localization [3.943289808718775]
We introduce CALLOC, a novel framework designed to resist adversarial attacks and variations across indoor environments and devices.
CALLOC employs a novel adaptive curriculum learning approach with a domain specific lightweight scaled-dot product attention neural network.
We show that CALLOC can achieve improvements of up to 6.03x in mean error and 4.6x in worst-case error against state-of-the-art indoor localization frameworks.
arXiv Detail & Related papers (2023-11-10T19:26:31Z) - Filling the Missing: Exploring Generative AI for Enhanced Federated
Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experiment results demonstrate that FIMI can save up to 50% of the device-side energy to achieve the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
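The aggregation error this entry refers to can be illustrated with a toy AirFedAvg-style round: over-the-air computation superimposes the analog signals of all devices, so the server receives the sum of the transmitted updates plus receiver noise, then divides by the device count to recover a (noisy) average. The names and the additive-Gaussian channel below are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def air_fedavg_round(local_models, noise_std=0.01):
    """One AirFedAvg-style aggregation round (toy sketch).

    The channel delivers the superposition (sum) of all device signals
    corrupted by additive receiver noise; dividing by the number of
    devices yields a noisy estimate of the model average.
    """
    M = np.stack(local_models)
    received = M.sum(axis=0) + rng.normal(0.0, noise_std, M.shape[1])
    return received / len(local_models)
```

With `noise_std=0` this reduces to exact FedAvg; any nonzero noise appears directly in the aggregated model, which is the aggregation error the analysis quantifies.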
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - FedHIL: Heterogeneity Resilient Federated Learning for Robust Indoor
Localization with Mobile Devices [4.226118870861363]
Indoor localization plays a vital role in applications such as emergency response, warehouse management, and augmented reality experiences.
We propose FedHIL, a novel embedded machine learning framework that combines indoor localization and federated learning (FL) to improve localization accuracy in device-heterogeneous environments.
arXiv Detail & Related papers (2023-07-04T15:34:13Z) - FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for
Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices to protect data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.