Attacking the Loop: Adversarial Attacks on Graph-based Loop Closure
Detection
- URL: http://arxiv.org/abs/2312.06991v1
- Date: Tue, 12 Dec 2023 05:23:15 GMT
- Title: Attacking the Loop: Adversarial Attacks on Graph-based Loop Closure
Detection
- Authors: Jonathan J.Y. Kim, Martin Urschler, Patricia J. Riddle and Jorg S.
Wicker
- Abstract summary: Loop Closure Detection (LCD) is a crucial component in visual SLAM (vSLAM).
We present Adversarial-LCD, a novel black-box evasion attack framework that employs an eigencentrality-based perturbation method.
Our evaluation shows that the attack performance of Adversarial-LCD with the SVM-RBF surrogate model was superior to that of other machine learning surrogate algorithms.
- Score: 1.1060425537315086
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the advancement in robotics, it is becoming increasingly common for
large factories and warehouses to incorporate visual SLAM (vSLAM) enabled
automated robots that operate closely next to humans. This makes any
adversarial attacks on vSLAM components potentially detrimental to humans
working alongside them. Loop Closure Detection (LCD) is a crucial component in
vSLAM that minimizes the accumulation of drift in mapping, since even a small
drift can accumulate into a significant drift over time. A prior work by Kim et
al., SymbioLCD2, unified visual features and semantic objects into a single
graph structure for finding loop closure candidates. While this provided a
performance improvement over visual feature-based LCD, it also created a single
point of vulnerability for potential graph-based adversarial attacks. Unlike
previously reported visual-patch based attacks, small graph perturbations are
far more challenging to detect, making them a more significant threat. In this
paper, we present Adversarial-LCD, a novel black-box evasion attack framework
that employs an eigencentrality-based perturbation method and an SVM-RBF
surrogate model with a Weisfeiler-Lehman feature extractor for attacking
graph-based LCD. Our evaluation shows that the attack performance of
Adversarial-LCD with the SVM-RBF surrogate model was superior to that of other
machine learning surrogate algorithms, including SVM-linear, SVM-polynomial,
and Bayesian classifier, demonstrating the effectiveness of our attack
framework. Furthermore, we show that our eigencentrality-based perturbation
method outperforms other algorithms, such as Random-walk and Shortest-path,
highlighting the efficiency of Adversarial-LCD's perturbation selection method.
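The abstract names two concrete mechanisms: selecting graph perturbations by eigencentrality, and training an SVM-RBF surrogate on Weisfeiler-Lehman graph features. The Python sketch below illustrates both ideas in simplified form; it is not the authors' implementation, and the callable `query_lcd` (a stand-in for querying the black-box LCD), the uncompressed WL labels, and all parameter choices are assumptions made for illustration.

```python
import networkx as nx
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

def wl_subtree_features(G, iterations=3):
    """Weisfeiler-Lehman label refinement: return a bag of subtree labels
    accumulated over all refinement iterations (labels left uncompressed
    for simplicity)."""
    labels = {v: str(G.nodes[v].get("label", G.degree(v))) for v in G}
    bag = Counter(labels.values())
    for _ in range(iterations):
        labels = {v: labels[v] + "|" + ",".join(sorted(labels[u] for u in G.neighbors(v)))
                  for v in G}
        bag.update(labels.values())
    return dict(bag)

def eigencentrality_perturbation(G, n_edges=1):
    """Toggle edges between the most eigencentral node pairs -- a simplified
    stand-in for the paper's eigencentrality-based perturbation selection."""
    cent = nx.eigenvector_centrality_numpy(G)
    ranked = sorted(G.nodes, key=cent.get, reverse=True)
    H, flipped = G.copy(), 0
    for i in range(len(ranked)):
        for j in range(i + 1, len(ranked)):
            if flipped >= n_edges:
                return H
            u, v = ranked[i], ranked[j]
            if H.has_edge(u, v):
                H.remove_edge(u, v)
            else:
                H.add_edge(u, v)
            flipped += 1
    return H

def train_surrogate(graphs, query_lcd):
    """Fit an SVM-RBF surrogate on WL features, using labels obtained by
    querying the black-box LCD (`query_lcd` is hypothetical)."""
    vec = DictVectorizer(sparse=False)
    X = vec.fit_transform([wl_subtree_features(g) for g in graphs])
    y = [query_lcd(g) for g in graphs]  # e.g. 1 = loop closure, 0 = not
    return SVC(kernel="rbf", probability=True).fit(X, y), vec
```

In a full attack loop, candidate perturbations from `eigencentrality_perturbation` would be scored against the surrogate and the most damaging one kept, the usual surrogate-guided pattern for black-box evasion attacks.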
Related papers
- On the Adversarial Robustness of Discrete Image Tokenizers [56.377796750281796]
We first formulate attacks that aim to perturb the features extracted by discrete tokenizers, and thus change the extracted tokens.
We fine-tune popular tokenizers with unsupervised adversarial training, keeping all other components frozen.
Our approach significantly improves robustness to both unsupervised and end-to-end supervised attacks and generalizes well to unseen tasks and data.
arXiv Detail & Related papers (2026-02-20T14:39:17Z) - Chameleon: Adaptive Adversarial Agents for Scaling-Based Visual Prompt Injection in Multimodal AI Systems [0.0]
We propose a novel, adaptive adversarial framework designed to expose and exploit scaling vulnerabilities in production Vision-Language Models (VLMs).
Our experiments demonstrate that Chameleon achieves an Attack Success Rate (ASR) of 84.5% across varying scaling factors.
We show that these attacks effectively compromise agentic pipelines, reducing decision-making accuracy by over 45% in multi-step tasks.
arXiv Detail & Related papers (2025-12-04T15:22:28Z) - Model-agnostic Adversarial Attack and Defense for Vision-Language-Action Models [25.45513133247862]
Vision-Language-Action (VLA) models have achieved revolutionary progress in robot learning.
Despite this progress, their adversarial robustness remains underexplored.
We propose both adversarial patch attacks and corresponding defense strategies for VLA models.
arXiv Detail & Related papers (2025-10-15T07:42:44Z) - RECALLED: An Unbounded Resource Consumption Attack on Large Vision-Language Models [16.62034667623657]
Resource Consumption Attacks (RCAs) have emerged as a significant threat to the deployment of Large Language Models (LLMs).
We present RECALLED, the first red-teaming approach that exploits the visual modality to trigger RCAs.
Our study exposes security vulnerabilities in LVLMs and establishes a red-teaming framework that can facilitate future defense development against RCAs.
arXiv Detail & Related papers (2025-07-24T02:58:16Z) - ORCHID: Streaming Threat Detection over Versioned Provenance Graphs [11.783370157959968]
We present ORCHID, a novel Prov-IDS that performs fine-grained detection of process-level threats over a real-time event stream.
ORCHID takes advantage of the unique immutable properties of a versioned provenance graph to iteratively embed the entire graph in a sequential RNN model.
We evaluate ORCHID on four public datasets, including DARPA TC, to show that ORCHID can provide competitive classification performance.
arXiv Detail & Related papers (2024-08-23T19:44:40Z) - Detecting Masquerade Attacks in Controller Area Networks Using Graph Machine Learning [0.2812395851874055]
This paper introduces a novel framework for detecting masquerade attacks in the CAN bus using graph machine learning (ML).
We show that by representing CAN bus frames as message sequence graphs (MSGs) and enriching each node with contextual statistical attributes from time series, we can enhance detection capabilities.
Our method ensures a comprehensive and dynamic analysis of CAN frame interactions, improving robustness and efficiency.
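As a rough illustration of the message-sequence-graph idea (not the paper's code), the sketch below turns a toy CAN trace into a directed graph whose nodes are arbitration IDs, whose edge weights count ID-to-ID transitions, and whose nodes carry a mean inter-arrival time as one example of a contextual time-series attribute; the exact attributes used in the paper may differ.

```python
import networkx as nx
from statistics import mean

def build_msg(frames):
    """Build a message sequence graph from a list of (timestamp, can_id) frames."""
    G = nx.DiGraph()
    last_seen, gaps, prev_id = {}, {}, None
    for ts, can_id in frames:
        G.add_node(can_id)
        if prev_id is not None:
            # count how often prev_id is immediately followed by can_id
            w = G.get_edge_data(prev_id, can_id, {}).get("weight", 0)
            G.add_edge(prev_id, can_id, weight=w + 1)
        if can_id in last_seen:  # inter-arrival gap for this ID
            gaps.setdefault(can_id, []).append(ts - last_seen[can_id])
        last_seen[can_id], prev_id = ts, can_id
    for can_id, g in gaps.items():
        G.nodes[can_id]["mean_gap"] = mean(g)  # contextual timing attribute
    return G

# toy trace: (timestamp in seconds, arbitration ID)
trace = [(0.00, 0x101), (0.01, 0x1A0), (0.02, 0x101), (0.03, 0x2F0)]
print(build_msg(trace).nodes(data=True))
```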
arXiv Detail & Related papers (2024-08-10T04:17:58Z) - Sparse and Transferable Universal Singular Vectors Attack [5.498495800909073]
We propose a novel sparse universal white-box adversarial attack.
Our approach is based on truncated power iteration, which provides sparsity to the $(p,q)$-singular vectors of the Jacobian matrices of the hidden layers.
Our findings demonstrate the vulnerability of state-of-the-art models to sparse attacks and highlight the importance of developing robust machine learning systems.
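For readers unfamiliar with the terminology, a generic hard-thresholded (truncated) power iteration for a sparse (p,q)-singular vector looks roughly like the sketch below. This is not the authors' algorithm: the nonlinearity psi_r(z) = sign(z)|z|^(r-1) is the standard one from the (p,q) power method, and the choices of p, q, sparsity k, and iteration count are illustrative.

```python
import numpy as np

def psi(z, r):
    """psi_r(z) = sign(z) * |z|**(r-1), the nonlinearity of the (p,q) power method."""
    return np.sign(z) * np.abs(z) ** (r - 1)

def truncated_pq_power(A, p=np.inf, q=5, k=50, iters=30, seed=0):
    """Sparse (p,q)-singular vector of A via power iteration with
    hard-thresholding: keep only the k largest-magnitude entries each step,
    then renormalize to unit p-norm."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x, ord=p)
    p_dual = 1.0 if np.isinf(p) else p / (p - 1.0)
    for _ in range(iters):
        x = psi(A.T @ psi(A @ x, q), p_dual)
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # enforce k-sparsity
        x /= np.linalg.norm(x, ord=p)
    return x

# toy usage: A stands in for a (stacked) hidden-layer Jacobian
A = np.random.default_rng(1).standard_normal((64, 256))
v = truncated_pq_power(A, k=32)
print(np.count_nonzero(v))  # at most 32 nonzero entries
```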
arXiv Detail & Related papers (2024-01-25T09:21:29Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior on the object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with a caution that the contour is a common weakness of object detectors across architectures.
arXiv Detail & Related papers (2023-03-01T07:22:39Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by perturbing only a few pixels.
Recent efforts combine this with an additional l_infty bound on the perturbation magnitudes.
We propose a homotopy algorithm to jointly tackle the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
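The recipe the summary describes is: record the activations of a chosen layer on in-distribution data, fit a density model on them, and flag inputs whose activations receive low likelihood. The sketch below follows that recipe but substitutes a Gaussian mixture for the paper's normalizing flow to keep it short; `model`, `layer`, and `loader` are assumed to be user-supplied PyTorch objects.

```python
import torch
from sklearn.mixture import GaussianMixture

def collect_activations(model, layer, loader, device="cpu"):
    """Record flattened activations of `layer` for every batch via a forward hook."""
    acts = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out.detach().flatten(1).cpu()))
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            model(x.to(device))
    handle.remove()
    return torch.cat(acts).numpy()

def fit_activation_density(train_acts, n_components=10):
    """Density estimator over in-distribution activations (the paper uses a
    normalizing flow; a diagonal GMM is substituted here for brevity)."""
    return GaussianMixture(n_components=n_components,
                           covariance_type="diag").fit(train_acts)

def anomaly_scores(density, acts):
    """Negative log-likelihood: higher means more likely OOD or adversarial."""
    return -density.score_samples(acts)
```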
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack
and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)