Improved Noise and Attack Robustness for Semantic Segmentation by Using
Multi-Task Training with Self-Supervised Depth Estimation
- URL: http://arxiv.org/abs/2004.11072v1
- Date: Thu, 23 Apr 2020 11:03:56 GMT
- Title: Improved Noise and Attack Robustness for Semantic Segmentation by Using
Multi-Task Training with Self-Supervised Depth Estimation
- Authors: Marvin Klingner, Andreas Bär, Tim Fingscheidt
- Abstract summary: We propose to improve robustness by multi-task training, which extends supervised semantic segmentation with self-supervised monocular depth estimation on unlabeled videos.
We show the effectiveness of our method on the Cityscapes dataset, where our multi-task training approach consistently outperforms the single-task semantic segmentation baseline.
- Score: 39.99513327031499
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current approaches for neural network training often aim at improving
performance, less focus is put on training methods that aim at robustness towards
varying noise conditions or directed attacks by adversarial examples. In this
paper, we propose to improve robustness by multi-task training, which extends
supervised semantic segmentation with self-supervised monocular depth
estimation on unlabeled videos. This additional task is performed only during
training, to improve the semantic segmentation model's robustness at test time
under several input perturbations. Moreover, we find that our joint training
approach also improves the performance of the model on the original
(supervised) semantic segmentation task. A particular novelty of our evaluation
is that it allows a mutual comparison of the effects of input noise and
adversarial attacks on the robustness of the semantic segmentation. We show the
effectiveness of our method on the Cityscapes dataset, where our multi-task
training approach consistently outperforms the single-task semantic
segmentation baseline in terms of robustness to both input noise and
adversarial attacks, without the need for depth labels in training.
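To make the training setup concrete, the following is a minimal PyTorch-style sketch of the general idea described in the abstract, not the authors' implementation: a shared encoder feeds a supervised segmentation head and a self-supervised depth head, and the two losses are combined during training. The tiny encoder, the loss weight depth_weight, and the simplified single-frame, L1-only view-synthesis loss are illustrative assumptions; self-supervised monocular depth training of this kind typically also involves a learned pose network, SSIM-based photometric terms, multi-scale prediction, and masking.

```python
# Minimal sketch (not the authors' code): shared encoder, supervised
# segmentation head, self-supervised depth head, combined training loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # Shared feature encoder (tiny stand-in for a real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)                        # per-pixel class logits
        self.depth_head = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Softplus())  # positive depth

    def forward(self, x):
        f = self.encoder(x)
        up = lambda t: F.interpolate(t, size=x.shape[-2:], mode="bilinear",
                                     align_corners=False)
        return up(self.seg_head(f)), up(self.depth_head(f))

def view_synthesis_loss(depth, target, source, K, T):
    """Simplified photometric loss: warp `source` into the `target` view using
    the predicted depth, camera intrinsics K (3x3) and relative pose T (Bx4x4)."""
    b, _, h, w = target.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).reshape(1, 3, -1)
    cam = depth.reshape(b, 1, -1) * (torch.linalg.inv(K) @ pix)   # backproject pixels to 3D
    cam = torch.cat([cam, torch.ones(b, 1, h * w)], dim=1)        # homogeneous coordinates
    proj = K @ (T @ cam)[:, :3]                                   # move to source view, project
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    grid = torch.stack([2 * uv[:, 0] / (w - 1) - 1,               # normalize to [-1, 1]
                        2 * uv[:, 1] / (h - 1) - 1], dim=-1).reshape(b, h, w, 2)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return F.l1_loss(warped, target)

def joint_loss(model, labeled_img, seg_labels, frame_t, frame_t1, K, T,
               depth_weight=0.1):
    """Supervised segmentation loss plus self-supervised depth loss."""
    seg_logits, _ = model(labeled_img)
    loss_seg = F.cross_entropy(seg_logits, seg_labels, ignore_index=255)
    _, depth = model(frame_t)                                     # depth needs no labels
    loss_depth = view_synthesis_loss(depth, frame_t, frame_t1, K, T)
    return loss_seg + depth_weight * loss_depth
```

Here the relative pose T between adjacent video frames is assumed to be given; in practice it is usually estimated by a small pose network trained jointly with the depth head. The segmentation batch comes from labeled images and the depth batch from unlabeled video frames, so no depth labels are required, as stated in the abstract. A complementary sketch of the noise-vs-attack robustness comparison follows the related-papers list below.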
Related papers
- Learning to Generate Training Datasets for Robust Semantic Segmentation [37.9308918593436] (2023-08-01)
We propose a novel approach to improve the robustness of semantic segmentation techniques.
We design Robusta, a novel conditional generative adversarial network to generate realistic and plausible perturbed images.
Our results suggest that this approach could be valuable in safety-critical applications.
- SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness [63.726895965125145] (2022-07-25)
Deep neural network-based image classifiers are vulnerable to adversarial perturbations.
In this work, we propose an effective and efficient segmentation attack method, dubbed SegPGD.
Since SegPGD can create more effective adversarial examples, adversarial training with SegPGD can boost the robustness of segmentation models.
- Learn to Adapt for Monocular Depth Estimation [17.887575611570394] (2022-03-26)
We propose an adversarial depth estimation task and train the model in a meta-learning pipeline.
Our method adapts well to new datasets after a few training steps during the test procedure.
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243] (2021-09-24)
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace region regression and classification with cross-modality region contrastive learning.
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942] (2021-05-17)
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
- Consistency Training with Virtual Adversarial Discrete Perturbation [17.311821099484987] (2021-04-15)
We propose an effective consistency training framework that enforces a training model's predictions on original and perturbed inputs to be similar.
This virtual adversarial discrete noise, obtained by replacing a small portion of tokens, efficiently pushes a training model's decision boundary.
- Stylized Adversarial Defense [105.88250594033053] (2020-07-29)
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
- Semantics-Driven Unsupervised Learning for Monocular Depth and Ego-Motion Estimation [33.83396613039467] (2020-06-08)
We propose a semantics-driven unsupervised learning approach for monocular depth and ego-motion estimation from videos.
Recent unsupervised learning methods employ the photometric error between a synthesized view and the actual image as a supervision signal for training.
- Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874] (2020-03-14)
Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
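To make the robustness comparison from the abstract concrete, the following is a minimal sketch, independent of the authors' code, of how a segmentation model can be evaluated under input noise and under a gradient-based adversarial attack. It assumes a model that returns per-pixel class logits, labels with 255 as the ignore index, and images in [0, 1]; the single-step FGSM attack is a stand-in for the stronger iterative attacks (e.g., PGD- or SegPGD-style, see above) used in the literature, and pixel accuracy is a cheap proxy for the mIoU reported in such evaluations.

```python
# Sketch only: compare segmentation accuracy under Gaussian input noise and
# under an FGSM-style adversarial attack of strength eps.
import torch
import torch.nn.functional as F

def gaussian_perturb(images, eps):
    """I.i.d. Gaussian noise with standard deviation eps, clipped to [0, 1]."""
    return (images + eps * torch.randn_like(images)).clamp(0.0, 1.0)

def fgsm_perturb(model, images, labels, eps):
    """One-step attack: ascend the sign of the per-pixel cross-entropy gradient."""
    images = images.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels, ignore_index=255)
    loss.backward()
    return (images + eps * images.grad.sign()).detach().clamp(0.0, 1.0)

@torch.no_grad()
def pixel_accuracy(model, images, labels):
    """Fraction of correctly classified valid pixels (proxy for mIoU)."""
    pred = model(images).argmax(dim=1)
    valid = labels != 255
    return (pred[valid] == labels[valid]).float().mean().item()

def compare_robustness(model, images, labels, eps=8 / 255):
    """Evaluate the same model on clean, noisy, and adversarial inputs."""
    model.eval()
    return {
        "clean": pixel_accuracy(model, images, labels),
        "gaussian": pixel_accuracy(model, gaussian_perturb(images, eps), labels),
        "fgsm": pixel_accuracy(model, fgsm_perturb(model, images, labels, eps), labels),
    }
```

Evaluating both perturbation types at comparable strengths is one simple way to place noise robustness and attack robustness side by side, in the spirit of the comparison described in the abstract.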