Evaluating the Robustness of Off-Road Autonomous Driving Segmentation
against Adversarial Attacks: A Dataset-Centric Analysis
- URL: http://arxiv.org/abs/2402.02154v1
- Date: Sat, 3 Feb 2024 13:48:57 GMT
- Title: Evaluating the Robustness of Off-Road Autonomous Driving Segmentation
against Adversarial Attacks: A Dataset-Centric Analysis
- Authors: Pankaj Deoli, Rohit Kumar, Axel Vierling, Karsten Berns
- Abstract summary: This study investigates the vulnerability of semantic segmentation models to adversarial input perturbations.
We compare the effects of adversarial attacks on different segmentation network architectures.
This work contributes to the safe navigation of the autonomous robot Unimog U5023 in rough, unstructured off-road environments.
- Score: 1.6538732383658392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates the vulnerability of semantic segmentation models to
adversarial input perturbations, in the domain of off-road autonomous driving.
Despite good performance in generic conditions, state-of-the-art classifiers
are often susceptible to even small perturbations, ultimately resulting in
inaccurate predictions with high confidence. Prior research has focused on
making models more robust by modifying the architecture and training on noisy
input images, but has not explored the influence of datasets on adversarial
attacks. Our study aims to address this gap by
examining the impact of non-robust features in off-road datasets and comparing
the effects of adversarial attacks on different segmentation network
architectures. To enable this, we create a robust dataset consisting of only
robust features and train the networks on this robustified dataset. We present
both qualitative and quantitative analyses of our findings, which have
important implications for improving the robustness of machine learning models
in off-road autonomous driving applications. Additionally, this work
contributes to the safe navigation of the autonomous robot Unimog U5023 in
rough, unstructured off-road environments by evaluating the robustness of
segmentation
outputs. The code is publicly available at
https://github.com/rohtkumar/adversarial_attacks_on_segmentation
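The abstract does not name a concrete attack, so the sketch below assumes a single-step FGSM perturbation purely for illustration; `model`, `clean_img`, and `mask` are hypothetical placeholders for a trained off-road segmentation network and a labelled frame. It illustrates the kind of before/after comparison implied by "evaluating the robustness of segmentation outputs".

```python
# Minimal sketch (assumption: FGSM attack, PyTorch segmentation model that
# returns per-pixel logits of shape (N, num_classes, H, W)).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, target, epsilon=8 / 255):
    """Return an adversarially perturbed copy of `image` (N, C, H, W)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), target)   # target: (N, H, W) class ids
    loss.backward()
    # One signed-gradient step, clamped back to the valid input range.
    return (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

@torch.no_grad()
def pixel_accuracy(model, image, target):
    pred = model(image).argmax(dim=1)
    return (pred == target).float().mean().item()

# Hypothetical usage with a trained network and one labelled off-road frame:
# adv_img = fgsm_perturb(model, clean_img, mask)
# print(pixel_accuracy(model, clean_img, mask),
#       pixel_accuracy(model, adv_img, mask))
```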
Related papers
- Revisiting Generative Adversarial Networks for Binary Semantic
Segmentation on Imbalanced Datasets [20.538287907723713]
Anomalous crack region detection is a typical binary semantic segmentation task, which aims to automatically detect pixels representing cracks in pavement surface images.
Existing deep learning-based methods have achieved outstanding results on specific public pavement datasets, but their performance deteriorates dramatically on imbalanced datasets.
We propose a deep learning framework based on conditional Generative Adversarial Networks (cGANs) for anomalous crack region detection at the pixel level.
arXiv Detail & Related papers (2024-02-03T19:24:40Z) - Reliability in Semantic Segmentation: Can We Use Synthetic Data? [69.28268603137546]
We show for the first time how synthetic data can be specifically generated to assess comprehensively the real-world reliability of semantic segmentation models.
This synthetic data is employed to evaluate the robustness of pretrained segmenters.
We demonstrate how our approach can be utilized to enhance the calibration and OOD detection capabilities of segmenters.
arXiv Detail & Related papers (2023-12-14T18:56:07Z) - Adversary ML Resilience in Autonomous Driving Through Human Centered
Perception Mechanisms [0.0]
This paper explores the resilience of autonomous driving systems against three main physical adversarial attacks (tape, graffiti, illumination).
To build robustness against attacks, defense techniques such as adversarial training and transfer learning were implemented (a minimal adversarial-training sketch appears after this list).
Results demonstrated that transfer learning played a crucial role, allowing knowledge gained from shape training to improve the generalizability of road sign classification.
arXiv Detail & Related papers (2023-11-02T04:11:45Z) - Learning to Generate Training Datasets for Robust Semantic Segmentation [37.9308918593436]
We propose a novel approach to improve the robustness of semantic segmentation techniques.
We design Robusta, a novel conditional generative adversarial network to generate realistic and plausible perturbed images.
Our results suggest that this approach could be valuable in safety-critical applications.
arXiv Detail & Related papers (2023-08-01T10:02:26Z) - Robustness Benchmark of Road User Trajectory Prediction Models for
Automated Driving [0.0]
We benchmark machine learning models against perturbations that simulate functional insufficiencies observed during model deployment in a vehicle.
Training the models with similar perturbations effectively reduces this degradation, limiting error increases to at most +87.5%.
We argue that despite being an effective mitigation strategy, data augmentation through perturbations during training does not guarantee robustness towards unforeseen perturbations.
arXiv Detail & Related papers (2023-04-04T15:47:42Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many of the predictive signals in the data can instead stem from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z) - CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal
Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
We use these causal-relationship labels to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original data.
arXiv Detail & Related papers (2022-07-07T21:28:23Z) - CARLA-GeAR: a Dataset Generator for a Systematic Evaluation of
Adversarial Robustness of Vision Models [61.68061613161187]
This paper presents CARLA-GeAR, a tool for the automatic generation of synthetic datasets for evaluating the robustness of neural models against physical adversarial patches.
The tool is built on the CARLA simulator, using its Python API, and allows the generation of datasets for several vision tasks in the context of autonomous driving.
The paper presents an experimental study to evaluate the performance of some defense methods against such attacks, showing how the datasets generated with CARLA-GeAR might be used in future work as a benchmark for adversarial defense in the real world.
arXiv Detail & Related papers (2022-06-09T09:17:38Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
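Several of the entries above mention adversarial training as a defence. The following is a minimal, generic sketch of one adversarial-training step (a single-step FGSM variant), included only to make the idea concrete; it is not the exact procedure of any listed paper, and `model`, `optimizer`, `images`, and `labels` are assumed placeholders.

```python
# Minimal sketch of one adversarial-training step (assumption: FGSM-crafted
# examples mixed 50/50 with clean data; not the recipe of any specific paper).
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=4 / 255):
    # 1) Craft adversarial copies of the batch with a single FGSM step.
    adv = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(adv), labels).backward()
    adv = (adv + epsilon * adv.grad.sign()).clamp(0.0, 1.0).detach()

    # 2) Clear gradients from the attack, then update on clean + adversarial data.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```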