Transparency Distortion Robustness for SOTA Image Segmentation Tasks
- URL: http://arxiv.org/abs/2405.12864v1
- Date: Tue, 21 May 2024 15:30:25 GMT
- Title: Transparency Distortion Robustness for SOTA Image Segmentation Tasks
- Authors: Volker Knauthe, Arne Rak, Tristan Wirth, Thomas Pöllabauer, Simon Metzler, Arjan Kuijper, Dieter W. Fellner
- Abstract summary: We propose a method to synthetically augment existing datasets with spatially varying distortions.
Our experiments show that these distortion effects degrade the performance of state-of-the-art segmentation models.
- Score: 4.1119273264193685
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic image segmentation facilitates a multitude of real-world applications, ranging from autonomous driving and industrial process supervision to vision aids for humans. These models are usually trained in a supervised fashion using example inputs. Distribution shifts between these examples and the inputs encountered in operation may cause erroneous segmentations. The robustness of semantic segmentation models against distribution shifts caused by differing camera or lighting setups, lens distortions, adversarial inputs, and image corruptions has been a topic of recent research. However, robustness against spatially varying radial distortion effects, as caused by uneven glass structures (e.g. windows) or chaotic refraction in heated air, has not yet been addressed by the research community. We propose a method to synthetically augment existing datasets with spatially varying distortions. Our experiments show that these distortion effects degrade the performance of state-of-the-art segmentation models. Pretraining and enlarged model capacity prove to be suitable strategies for mitigating the performance degradation to some degree, while fine-tuning on distorted images leads only to marginal performance improvements.
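The authors' augmentation code is not reproduced here, but the core idea, warping an image and its label mask with a smooth, spatially varying displacement field, can be sketched as follows. The field model (Gaussian-smoothed random displacements) and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import cv2
import numpy as np

def spatially_varying_distortion(image, mask, strength=8.0, smoothness=31, seed=0):
    """Warp an image/mask pair with a smooth random displacement field,
    loosely mimicking refraction through uneven glass (hypothetical
    parameters, not the authors' implementation)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Per-pixel random displacements, Gaussian-smoothed so the distortion
    # varies gradually across the image; `strength` is in pixels.
    dx = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (smoothness, smoothness), 0) * strength
    dy = cv2.GaussianBlur(rng.standard_normal((h, w)).astype(np.float32),
                          (smoothness, smoothness), 0) * strength
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Bilinear resampling for the image, nearest neighbour for the label
    # mask so class ids are never interpolated.
    warped_image = cv2.remap(image, xs + dx, ys + dy, cv2.INTER_LINEAR)
    warped_mask = cv2.remap(mask, xs + dx, ys + dy, cv2.INTER_NEAREST)
    return warped_image, warped_mask
```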
Related papers
- Denoising as Adaptation: Noise-Space Domain Adaptation for Image Restoration [64.84134880709625]
We show that it is possible to perform domain adaptation via the noise space using diffusion models.
In particular, by leveraging the unique property of how auxiliary conditional inputs influence the multi-step denoising process, we derive a meaningful diffusion loss.
We present crucial strategies such as a channel-shuffling layer and residual-swapping contrastive learning in the diffusion model.
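For reference, a generic ShuffleNet-style channel shuffle is one plausible reading of the "channel-shuffling layer" named above; the paper's exact variant may differ. A minimal PyTorch sketch:

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Generic channel shuffle: split channels into groups and interleave
    them. An assumed stand-in, not necessarily the paper's layer."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    return (x.view(n, groups, c // groups, h, w)
             .transpose(1, 2)
             .reshape(n, c, h, w))
```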
arXiv Detail & Related papers (2024-06-26T17:40:30Z)
- Adversarial Examples are Misaligned in Diffusion Model Manifolds [7.979892202477701]
This study is dedicated to the investigation of adversarial attacks through the lens of diffusion models.
Our focus lies in utilizing the diffusion model to detect and analyze the anomalies introduced by these attacks on images.
Results demonstrate a notable capacity to discriminate effectively between benign and attacked images.
arXiv Detail & Related papers (2024-01-12T15:29:21Z)
- Investigating Shift Equivalence of Convolutional Neural Networks in Industrial Defect Segmentation [3.843350895842836]
In industrial defect segmentation tasks, output consistency (also referred to as equivalence) of the model is often overlooked.
A novel pair of down/upsampling layers called component attention polyphase sampling (CAPS) is proposed as a replacement for the conventional sampling layers in CNNs.
The experimental results on the micro surface defect (MSD) dataset and four real-world industrial defect datasets demonstrate that the proposed method exhibits higher equivalence and segmentation performance.
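As background, the polyphase idea behind CAPS can be illustrated with plain adaptive polyphase sampling: keep the stride-2 polyphase component with the largest norm, so a one-pixel input shift no longer changes which samples survive. CAPS replaces this hard argmax with learned component attention; the sketch below (assuming even spatial dimensions) shows only the shared mechanism:

```python
import torch

def adaptive_polyphase_downsample(x: torch.Tensor) -> torch.Tensor:
    """Stride-2 downsampling that keeps, per sample, the polyphase
    component with the largest l2 norm, making the choice stable under
    one-pixel shifts. Illustrative only; not the CAPS layer itself."""
    n = x.shape[0]
    # Four stride-2 polyphase components: offsets (0,0), (0,1), (1,0), (1,1).
    comps = torch.stack([x[..., i::2, j::2] for i in (0, 1) for j in (0, 1)],
                        dim=1)                       # (n, 4, C, H/2, W/2)
    idx = comps.flatten(start_dim=2).norm(dim=2).argmax(dim=1)  # (n,)
    return comps[torch.arange(n), idx]
```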
arXiv Detail & Related papers (2023-09-29T00:04:47Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
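The TV penalty itself is straightforward to sketch; a minimal PyTorch version over a (N, C, H, W) feature map follows, with the caveat that the paper's exact formulation and weighting may differ:

```python
import torch

def feature_tv_loss(feat: torch.Tensor) -> torch.Tensor:
    """Total variation of a feature map: mean absolute difference between
    spatially neighbouring activations. Added to the task loss with some
    weight `lam` (hypothetical name), it penalizes high-frequency feature
    content."""
    dh = (feat[..., 1:, :] - feat[..., :-1, :]).abs().mean()
    dw = (feat[..., :, 1:] - feat[..., :, :-1]).abs().mean()
    return dh + dw
```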
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- SIR: Self-supervised Image Rectification via Seeing the Same Scene from Multiple Different Lenses [82.56853587380168]
We propose a novel self-supervised image rectification (SIR) method based on the insight that the rectified results of distorted images of the same scene taken with different lenses should be identical.
We leverage a differentiable warping module to generate the rectified images and re-distorted images from the distortion parameters.
Our method achieves comparable or even better performance than the supervised baseline method and representative state-of-the-art methods.
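Differentiable warps of this kind are commonly built on `grid_sample`; the following single-parameter radial warp is a minimal stand-in for the paper's warping module, whose distortion model is richer:

```python
import torch
import torch.nn.functional as F

def radial_warp(img: torch.Tensor, k: torch.Tensor) -> torch.Tensor:
    """Differentiable radial warp of a (N, C, H, W) batch; k is a
    per-sample coefficient (shape (N,)) whose sign and magnitude control
    the radial distortion. Illustrative, not the paper's module."""
    n, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=img.device),
                            torch.linspace(-1, 1, w, device=img.device),
                            indexing="ij")
    r2 = xs ** 2 + ys ** 2                          # squared radius per pixel
    scale = 1 + k.view(-1, 1, 1) * r2               # (N, H, W) radial scaling
    grid = torch.stack((xs * scale, ys * scale), dim=-1)  # (N, H, W, 2)
    return F.grid_sample(img, grid, align_corners=True)
```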
arXiv Detail & Related papers (2020-11-30T08:23:25Z)
- Wide-angle Image Rectification: A Survey [86.36118799330802]
Wide-angle images contain distortions that violate the assumptions underlying pinhole camera models.
Image rectification, which aims to correct these distortions, can solve these problems.
We present a detailed description and discussion of the camera models used in different approaches.
Next, we review both traditional geometry-based image rectification methods and deep learning-based methods.
arXiv Detail & Related papers (2020-10-30T17:28:40Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
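Conceptually, AdvBN perturbs a feature map's per-channel statistics rather than its values. A hedged sketch of that re-statistics step follows; the adversarial inner maximization that chooses the perturbations is omitted, and the shapes are illustrative:

```python
import torch

def perturb_feature_stats(feat: torch.Tensor, d_mean: torch.Tensor,
                          d_std: torch.Tensor) -> torch.Tensor:
    """Shift a (N, C, H, W) feature map's per-channel mean and std by
    relative perturbations d_mean, d_std of shape (C,). In AdvBN these
    would be chosen adversarially by ascending the task loss."""
    mu = feat.mean(dim=(0, 2, 3), keepdim=True)
    sigma = feat.std(dim=(0, 2, 3), keepdim=True) + 1e-5
    normed = (feat - mu) / sigma
    # Re-colour the normalized features with the perturbed statistics.
    return (normed * sigma * (1 + d_std.view(1, -1, 1, 1))
            + mu * (1 + d_mean.view(1, -1, 1, 1)))
```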
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Adversarial Semantic Data Augmentation for Human Pose Estimation [96.75411357541438]
We propose Semantic Data Augmentation (SDA), a method that augments images by pasting segmented body parts at varying levels of semantic granularity.
We also propose Adversarial Semantic Data Augmentation (ASDA), which exploits a generative network to dynamically predict tailored pasting configurations.
State-of-the-art results are achieved on challenging benchmarks.
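The core paste operation behind SDA is ordinary alpha compositing of a segmented part crop; in ASDA the part choice, location, and scale come from the adversarial generator rather than fixed arguments. A minimal sketch, assuming the crop fits inside the image:

```python
import numpy as np

def paste_segment(image: np.ndarray, part_rgb: np.ndarray,
                  part_alpha: np.ndarray, x: int, y: int) -> np.ndarray:
    """Alpha-composite a segmented part crop (part_rgb: (h, w, 3),
    part_alpha: (h, w) in [0, 1]) onto image at (x, y). Illustrative
    fixed-argument version; bounds checking is omitted."""
    out = image.copy()
    h, w = part_rgb.shape[:2]
    region = out[y:y + h, x:x + w].astype(np.float32)
    a = part_alpha[..., None].astype(np.float32)
    out[y:y + h, x:x + w] = (a * part_rgb + (1 - a) * region).astype(out.dtype)
    return out
```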
arXiv Detail & Related papers (2020-08-03T07:56:04Z)
- Model-based occlusion disentanglement for image-to-image translation [26.36897056828784]
Our unsupervised model-based learning disentangles scene and occlusions.
We generate highly realistic translations, qualitatively and quantitatively outperforming the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-04-02T15:24:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.