Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous
- URL: http://arxiv.org/abs/2311.05992v1
- Date: Fri, 10 Nov 2023 11:07:31 GMT
- Title: Robust Adversarial Attacks Detection for Deep Learning based Relative Pose Estimation for Space Rendezvous
- Authors: Ziwei Wang, Nabil Aouf, Jose Pizarro, Christophe Honvault
- Abstract summary: We propose a novel approach for adversarial attack detection for deep neural network-based relative pose estimation schemes.
The proposed adversarial attack detector achieves a detection accuracy of 99.21%.
- Score: 8.191688622709444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on deep learning techniques for autonomous spacecraft relative navigation has grown continuously in recent years. Adopting these techniques offers enhanced performance; however, it also raises concerns about the trustworthiness and security of such deep learning methods, owing to their susceptibility to adversarial attacks. In this work, we propose a novel, explainability-based approach for detecting adversarial attacks on deep neural network-based relative pose estimation schemes. For an orbital rendezvous scenario, we develop an innovative relative pose estimation technique built on our proposed Convolutional Neural Network (CNN), which takes an image from the chaser's onboard camera and accurately outputs the target's relative position and rotation. We seamlessly perturb the input images with adversarial attacks generated by the Fast Gradient Sign Method (FGSM). The adversarial attack detector is then built on a Long Short-Term Memory (LSTM) network that takes an explainability measure, namely the SHapley values, from the CNN-based pose estimator and flags inputs on which an attack is acting. Simulation results show that the proposed adversarial attack detector achieves a detection accuracy of 99.21%. Both the deep relative pose estimator and the adversarial attack detector are then tested on real data captured with our laboratory-designed setup, where the detector achieves an average detection accuracy of 96.29%.
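To make the pipeline above concrete, below is a minimal, hypothetical PyTorch sketch of the three stages the abstract describes: a CNN pose regressor, FGSM perturbation of its input, and an LSTM detector operating on per-frame explainability features. All architectures, sizes, and hyperparameters here (PoseCNN, eps=0.02, the 64-dimensional attribution vector, etc.) are illustrative assumptions, not the authors' values; in particular, a cheap gradient-times-input saliency stands in for the SHapley values the paper actually uses.

```python
# Hypothetical sketch of the detection pipeline described in the abstract.
import torch
import torch.nn as nn

class PoseCNN(nn.Module):
    """Toy stand-in for the paper's CNN relative pose estimator."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.head = nn.Linear(32 * 8 * 8, 7)  # x, y, z + unit quaternion

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def fgsm_attack(model, image, target_pose, eps=0.02):
    """Standard FGSM: one signed-gradient step on the regression loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(image), target_pose)
    loss.backward()
    adv = image + eps * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

def attribution_vector(model, image):
    """Cheap gradient*input saliency, pooled to a fixed-size vector.
    The paper uses SHapley values; this stand-in only illustrates the
    'explainability feature -> detector' data flow."""
    image = image.clone().detach().requires_grad_(True)
    model(image).norm().backward()
    saliency = (image.grad * image).abs().sum(1, keepdim=True)      # (B,1,H,W)
    return nn.functional.adaptive_avg_pool2d(saliency, 8).flatten(1)  # (B,64)

class AttackDetector(nn.Module):
    """LSTM over a sequence of attribution vectors -> attack probability."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.cls = nn.Linear(hidden, 1)

    def forward(self, seq):                           # seq: (B, T, feat_dim)
        out, _ = self.lstm(seq)
        return torch.sigmoid(self.cls(out[:, -1]))    # P(attacked)

# Usage: attribution vectors from consecutive camera frames feed the LSTM.
cnn, detector = PoseCNN(), AttackDetector()
frames = torch.rand(4, 3, 128, 128)                   # fake image sequence
adv = fgsm_attack(cnn, frames, torch.zeros(4, 7))     # zeros = dummy true pose
feats = torch.stack([attribution_vector(cnn, f.unsqueeze(0)).squeeze(0)
                     for f in adv]).unsqueeze(0)      # (1, T, 64)
print(detector(feats))                                # attack probability
```

At inference time, the detector's output probability would be thresholded to flag the frames on which an attack is acting.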
Related papers
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks on various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors [0.0]
An adaptive attack is one where the attacker is aware of the defenses and adapts their strategy accordingly.
Our proposed method leverages adversarial training to reinforce the ability to detect attacks, without compromising clean accuracy.
Experimental evaluations on the CIFAR-10 and SVHN datasets demonstrate that our proposed algorithm significantly improves a detector's ability to accurately identify adaptive adversarial attacks.
arXiv Detail & Related papers (2024-04-18T12:13:09Z)
- ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency [11.160041268858773]
Deep neural networks (DNNs) are increasingly integrated into LiDAR-based perception systems for autonomous vehicles (AVs).
We aim to address the challenge of LiDAR spoofing attacks, where attackers inject fake objects into LiDAR data and fool AVs into misinterpreting their environment and making erroneous decisions.
We propose ADoPT (Anomaly Detection based on Point-level Temporal consistency), which quantitatively measures temporal consistency across consecutive frames and identifies abnormal objects based on the coherency of point clusters.
In our evaluation using the nuScenes dataset, our algorithm effectively counters various LiDAR spoofing attacks, achieving a low false positive ratio.
arXiv Detail & Related papers (2023-10-23T02:31:31Z)
- New Adversarial Image Detection Based on Sentiment Analysis [37.139957973240264]
Adversarial attack models, e.g., DeepFool, are on the rise and are outrunning adversarial example detection techniques.
This paper presents a new adversarial example detector that outperforms state-of-the-art detectors in identifying the latest adversarial attacks on image datasets.
arXiv Detail & Related papers (2023-05-03T14:32:21Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances detection robustness while maintaining detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is an effective strategy for improving the robustness of convolutional networks against adversarial attacks.
The presented defense relies on specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep learning has been its fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results on the case of image classification demonstrate the effectiveness of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists a correspondence between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the perspective of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.