SENTINEL: Securing Indoor Localization against Adversarial Attacks with Capsule Neural Networks
- URL: http://arxiv.org/abs/2407.11091v1
- Date: Sun, 14 Jul 2024 21:40:12 GMT
- Title: SENTINEL: Securing Indoor Localization against Adversarial Attacks with Capsule Neural Networks
- Authors: Danish Gufran, Pooja Anandathirtha, Sudeep Pasricha
- Abstract summary: We present SENTINEL, a novel embedded machine learning framework to bolster the resilience of indoor localization solutions against adversarial attacks.
We also introduce RSSRogueLoc, a dataset capturing the effects of rogue APs from several real-world indoor environments.
- Score: 2.7186493234782527
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the increasing demand for edge device powered location-based services in indoor environments, Wi-Fi received signal strength (RSS) fingerprinting has become popular, given the unavailability of GPS indoors. However, achieving robust and efficient indoor localization faces several challenges, due to RSS fluctuations from dynamic changes in indoor environments and heterogeneity of edge devices, leading to diminished localization accuracy. While advances in machine learning (ML) have shown promise in mitigating these phenomena, it remains an open problem. Additionally, emerging threats from adversarial attacks on ML-enhanced indoor localization systems, especially those introduced by malicious or rogue access points (APs), can deceive ML models to further increase localization errors. To address these challenges, we present SENTINEL, a novel embedded ML framework utilizing modified capsule neural networks to bolster the resilience of indoor localization solutions against adversarial attacks, device heterogeneity, and dynamic RSS fluctuations. We also introduce RSSRogueLoc, a novel dataset capturing the effects of rogue APs from several real-world indoor environments. Experimental evaluations demonstrate that SENTINEL achieves significant improvements, with up to 3.5x reduction in mean error and 3.4x reduction in worst-case error compared to state-of-the-art frameworks using simulated adversarial attacks. SENTINEL also achieves improvements of up to 2.8x in mean error and 2.7x in worst-case error compared to state-of-the-art frameworks when evaluated with the real-world RSSRogueLoc dataset.
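The abstract describes a capsule-network backbone over Wi-Fi RSS fingerprints but does not spell out the "modified" architecture. As a rough illustration only, and not SENTINEL's actual design, the sketch below shows a primary-capsule layer applied to an RSS vector with the standard squash nonlinearity; the AP count, capsule count, and capsule dimensions are hypothetical placeholders.

```python
# Minimal, illustrative sketch (NOT the SENTINEL architecture, which the
# abstract does not specify): a primary-capsule layer over a Wi-Fi RSS
# fingerprint, using the standard "squash" nonlinearity from capsule networks.
import torch
import torch.nn as nn

def squash(s, dim=-1, eps=1e-8):
    """Squash nonlinearity: scales vector length into (0, 1), keeps direction."""
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

class RSSPrimaryCapsules(nn.Module):
    """Maps an RSS fingerprint (one reading per AP) to a set of capsule vectors."""
    def __init__(self, num_aps=100, num_capsules=16, capsule_dim=8):
        super().__init__()
        self.num_capsules = num_capsules
        self.capsule_dim = capsule_dim
        self.fc = nn.Linear(num_aps, num_capsules * capsule_dim)

    def forward(self, rss):              # rss: (batch, num_aps), e.g. dBm values
        u = self.fc(rss)                 # (batch, num_capsules * capsule_dim)
        u = u.view(-1, self.num_capsules, self.capsule_dim)
        return squash(u)                 # capsule length encodes activation strength

# Hypothetical usage: 100 visible APs, batch of 4 fingerprints.
caps = RSSPrimaryCapsules(num_aps=100)
out = caps(torch.randn(4, 100))          # -> (4, 16, 8)
```

In a full capsule network these primary capsules would feed a routing stage whose output capsules correspond to candidate locations; those details are omitted here because the paper's abstract does not describe them.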
Related papers
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z) - SANGRIA: Stacked Autoencoder Neural Networks with Gradient Boosting for Indoor Localization [3.3379026542599934]
We propose a novel fingerprinting-based framework for indoor localization called SANGRIA.
We demonstrate 42.96% lower average localization error across diverse indoor locales and heterogeneous devices.
arXiv Detail & Related papers (2024-03-03T00:01:29Z) - FILP-3D: Enhancing 3D Few-shot Class-incremental Learning with Pre-trained Vision-Language Models [62.663113296987085]
Few-shot class-incremental learning aims to mitigate the catastrophic forgetting issue when a model is incrementally trained on limited data.
We introduce two novel components: the Redundant Feature Eliminator (RFE) and the Spatial Noise Compensator (SNC).
Considering the imbalance in existing 3D datasets, we also propose new evaluation metrics that offer a more nuanced assessment of a 3D FSCIL model.
arXiv Detail & Related papers (2023-12-28T14:52:07Z) - CALLOC: Curriculum Adversarial Learning for Secure and Robust Indoor Localization [3.943289808718775]
We introduce CALLOC, a novel framework designed to resist adversarial attacks and variations across indoor environments and devices.
CALLOC employs a novel adaptive curriculum learning approach with a domain-specific, lightweight scaled dot-product attention neural network.
We show that CALLOC can achieve improvements of up to 6.03x in mean error and 4.6x in worst-case error against state-of-the-art indoor localization frameworks.
arXiv Detail & Related papers (2023-11-10T19:26:31Z) - FedHIL: Heterogeneity Resilient Federated Learning for Robust Indoor Localization with Mobile Devices [4.226118870861363]
Indoor localization plays a vital role in applications such as emergency response, warehouse management, and augmented reality experiences.
We propose FedHIL, a novel embedded machine learning framework that combines indoor localization and federated learning (FL) to improve localization accuracy in device-heterogeneous environments.
arXiv Detail & Related papers (2023-07-04T15:34:13Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - FIRE: A Failure-Adaptive Reinforcement Learning Framework for Edge Computing Migrations [52.85536740465277]
FIRE is a framework that adapts to rare events by training an RL policy in an edge computing digital twin environment.
We propose ImRE, an importance sampling-based Q-learning algorithm, which samples rare events proportionally to their impact on the value function.
We show that FIRE reduces costs compared to vanilla RL and the greedy baseline in the event of failures.
arXiv Detail & Related papers (2022-09-28T19:49:39Z) - Multi-Head Attention Neural Network for Smartphone Invariant Indoor Localization [3.577310844634503]
Smartphones together with RSSI fingerprinting serve as an efficient approach for delivering a low-cost and high-accuracy indoor localization solution.
We propose a multi-head attention neural network-based indoor localization framework that is resilient to device heterogeneity.
An in-depth analysis of our proposed framework demonstrates up to 35% accuracy improvement compared to state-of-the-art indoor localization techniques.
arXiv Detail & Related papers (2022-05-17T03:08:09Z) - Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network [75.1236305913734]
We investigate the dynamics-aware adversarial attack problem in deep neural networks.
Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
arXiv Detail & Related papers (2021-12-17T10:53:35Z) - Siamese Neural Encoders for Long-Term Indoor Localization with Mobile Devices [5.063728016437489]
Fingerprinting-based indoor localization is an emerging application domain for enhanced positioning and tracking of people and assets within indoor locales.
We propose a Siamese neural encoder-based framework that offers up to 40% reduction in degradation of localization accuracy over time compared to the state-of-the-art in the area.
arXiv Detail & Related papers (2021-11-28T07:22:55Z) - Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
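Several of the papers above (SENTINEL, CALLOC, the LGM attack papers, and the attribute-guided adversarial training work) evaluate or induce adversarial perturbations of the model input. As a generic illustration of a simulated attack on RSS-based localization, and not the rogue-AP threat model or attack algorithm of any specific paper above, the sketch below applies an FGSM-style perturbation that shifts each RSS reading by a bounded amount in the direction that increases localization error; the stand-in regression model and the epsilon value are hypothetical.

```python
# Generic FGSM-style perturbation of an RSS fingerprint, to illustrate the kind
# of simulated adversarial attack that robust localization frameworks are
# evaluated against. This is a textbook construction, not the rogue-AP attack
# model of SENTINEL or CALLOC; `model`, `epsilon`, and the data are placeholders.
import torch
import torch.nn.functional as F

def fgsm_rss_attack(model, rss, true_xy, epsilon=2.0):
    """Perturb each RSS reading by at most `epsilon` dB in the loss-increasing direction."""
    rss_adv = rss.clone().detach().requires_grad_(True)
    pred_xy = model(rss_adv)                       # predicted (x, y) location
    loss = F.mse_loss(pred_xy, true_xy)            # localization error
    loss.backward()
    with torch.no_grad():
        rss_adv = rss_adv + epsilon * rss_adv.grad.sign()
    return rss_adv.detach()

# Hypothetical usage with a stand-in regressor mapping 100 RSS values to (x, y).
model = torch.nn.Linear(100, 2)
rss = torch.randn(4, 100)
true_xy = torch.randn(4, 2)
adv_rss = fgsm_rss_attack(model, rss, true_xy)     # worst-case +/- 2 dB shift per AP
```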