Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation
- URL: http://arxiv.org/abs/2506.15988v1
- Date: Thu, 19 Jun 2025 03:19:21 GMT
- Title: Adversarial Attacks and Detection in Visual Place Recognition for Safer Robot Navigation
- Authors: Connor Malone, Owen Claxton, Iman Shames, Michael Milford
- Abstract summary: Stand-alone Visual Place Recognition (VPR) systems have little defence against well-designed adversarial attacks. This paper extensively analyzes the effect of four adversarial attacks common in other perception tasks and four novel VPR-specific attacks on VPR localization performance.
- Score: 16.01119279073898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stand-alone Visual Place Recognition (VPR) systems have little defence against a well-designed adversarial attack, which can lead to disastrous consequences when deployed for robot navigation. This paper extensively analyzes the effect on VPR localization performance of four adversarial attacks common in other perception tasks and four novel VPR-specific attacks. We then propose how to close the loop between VPR, an Adversarial Attack Detector (AAD), and active navigation decisions by demonstrating the performance benefit of simulated AADs in a novel experiment paradigm, which we detail for the robotics community to use as a system framework. In this paradigm, adding an AAD improves performance over the baseline across a range of detection accuracies; a significant improvement, such as a ~50% reduction in mean along-track localization error, can be achieved with a True Positive detection rate of only 75% and a False Positive detection rate of up to 25%. We examine a variety of metrics, including Along-Track Error, Percentage of Time Attacked, Percentage of Time in an 'Unsafe' State, and Longest Continuous Time Under Attack. Expanding on these results, we provide the first investigation into the efficacy of the Fast Gradient Sign Method (FGSM) adversarial attack for VPR. The analysis highlights the need for AADs in real-world systems for trustworthy navigation and informs quantitative requirements for system design.
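To make the quantitative requirements above concrete, the following is a minimal, hypothetical Python sketch rather than the authors' implementation: it applies a generic FGSM perturbation to a VPR query image and simulates an AAD that fires with a chosen True Positive / False Positive rate, mirroring the simulated-detector paradigm described in the abstract. The descriptor-model interface, the epsilon value, and all function names are assumptions for illustration.

```python
# Hypothetical sketch only: a generic FGSM perturbation of a VPR query image and
# a simulated Adversarial Attack Detector (AAD) with fixed TP/FP rates.
# The descriptor-model interface and all names here are illustrative assumptions,
# not the paper's implementation.
import torch
import numpy as np

def fgsm_attack_on_query(model, query, ref_descriptor, epsilon=8 / 255):
    """Perturb `query` (image tensor in [0, 1]) so its descriptor moves away from
    the matching reference descriptor, using a single FGSM step."""
    query = query.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(model(query), ref_descriptor)
    loss.backward()
    # Ascend the loss: push the query descriptor away from the correct match.
    adv = query + epsilon * query.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

class SimulatedAAD:
    """Flags each frame as 'under attack' with a given TP rate (when attacked)
    and FP rate (when clean), e.g. TP=0.75 / FP=0.25 as quoted in the abstract."""
    def __init__(self, tp_rate=0.75, fp_rate=0.25, seed=0):
        self.tp_rate, self.fp_rate = tp_rate, fp_rate
        self.rng = np.random.default_rng(seed)

    def detect(self, frame_is_attacked: bool) -> bool:
        p = self.tp_rate if frame_is_attacked else self.fp_rate
        return self.rng.random() < p
```

In a closed-loop setup, one plausible use of such detections is for the navigation stack to ignore VPR corrections and fall back to odometry while the simulated detector fires, which is one way attack detections could feed active navigation decisions.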
Related papers
- Active Test-time Vision-Language Navigation [60.69722522420299]
ATENA is a test-time active learning framework that enables practical human-robot interaction via episodic feedback on uncertain navigation outcomes. In particular, ATENA learns to increase certainty in successful episodes and decrease it in failed ones, improving uncertainty calibration. In addition, we propose a self-active learning strategy that enables an agent to evaluate its navigation outcomes based on confident predictions.
arXiv Detail & Related papers (2025-06-07T02:24:44Z) - FlippedRAG: Black-Box Opinion Manipulation Adversarial Attacks to Retrieval-Augmented Generation Models [22.35026334463735]
We propose FlippedRAG, a transfer-based adversarial attack against black-box RAG systems. FlippedRAG achieves on average a 50% directional shift in the opinion of RAG-generated responses. These results highlight an urgent need for developing innovative defensive solutions to ensure the security and trustworthiness of RAG systems.
arXiv Detail & Related papers (2025-01-06T12:24:57Z) - Improving Visual Place Recognition Based Robot Navigation By Verifying Localization Estimates [14.354164363224529]
This research introduces a novel Multi-Layer Perceptron (MLP) integrity monitor.
It demonstrates improved performance and generalizability, removing per-environment training and reducing manual tuning requirements.
We test our proposed system in extensive real-world experiments.
arXiv Detail & Related papers (2024-07-11T03:47:14Z) - Enhancing Object Detection Robustness: Detecting and Restoring Confidence in the Presence of Adversarial Patch Attacks [2.963101656293054]
This study evaluates defense mechanisms for the YOLOv5 model against adversarial patches. We tested several defenses, including Segment and Complete (SAC), Inpainting, and Latent Diffusion Models. Results indicate that adversarial patches reduce average detection confidence by 22.06%.
arXiv Detail & Related papers (2024-03-04T13:32:48Z) - ADVENT: Attack/Anomaly Detection in VANETs [0.8594140167290099]
This study introduces a system for real-time detection of malicious behavior.
By seamlessly integrating statistical and machine learning techniques, the proposed system prioritizes simplicity and efficiency.
It excels in swiftly detecting attack onsets with a remarkable F1-score of 99.66%, subsequently identifying malicious vehicles with an average F1-score of approximately 97.85%.
arXiv Detail & Related papers (2024-01-16T18:49:08Z) - Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous [8.191688622709444]
We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
arXiv Detail & Related papers (2023-11-10T11:07:31Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for ASV without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z) - Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD attack that overcome failures due to suboptimal step size and problems with the objective function (a generic PGD sketch is given below this list for context).
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
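For context on the PGD entry above, the following is a minimal sketch of standard, fixed-step PGD. The paper's actual contribution (step-size-adaptive extensions combined into a parameter-free ensemble) is not reproduced here, and the `model` and `loss_fn` interfaces are assumptions for illustration.

```python
# Minimal sketch of standard, fixed-step PGD for context only; the paper above
# extends PGD with adaptive step sizes and an alternative objective, which this
# does not reproduce. `model` and `loss_fn` are assumed, illustrative interfaces.
import torch

def pgd_attack(model, x, y, loss_fn, eps=8 / 255, alpha=2 / 255, steps=10):
    """Return an adversarial example within an L-infinity ball of radius `eps`."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Take a signed gradient ascent step, then project back into the eps-ball
        # around the original input and clip to the valid pixel range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv
```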