Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains
- URL: http://arxiv.org/abs/2404.15882v2
- Date: Thu, 25 Apr 2024 05:38:52 GMT
- Title: Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains
- Authors: Eunsu Baek, Keondo Park, Jiyoon Kim, Hyung-Sin Kim
- Abstract summary: We introduce a new distribution shift dataset, ImageNet-ES.
We evaluate out-of-distribution (OOD) detection and model robustness.
Our results suggest that effective shift mitigation via camera sensor control can significantly improve performance without increasing model size.
- Score: 2.4572304328659595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision applications predict on digital images acquired by a camera from physical scenes through light. However, conventional robustness benchmarks rely on perturbations in digitized images, diverging from distribution shifts occurring in the image acquisition process. To bridge this gap, we introduce a new distribution shift dataset, ImageNet-ES, comprising variations in environmental and camera sensor factors by directly capturing 202k images with a real camera in a controllable testbed. With the new dataset, we evaluate out-of-distribution (OOD) detection and model robustness. We find that existing OOD detection methods do not cope with the covariate shifts in ImageNet-ES, implying that the definition and detection of OOD should be revisited to embrace real-world distribution shifts. We also observe that the model becomes more robust in both ImageNet-C and -ES by learning environment and sensor variations in addition to existing digital augmentations. Lastly, our results suggest that effective shift mitigation via camera sensor control can significantly improve performance without increasing model size. With these findings, our benchmark may aid future research on robustness, OOD, and camera sensor control for computer vision. Our code and dataset are available at https://github.com/Edw2n/ImageNet-ES.
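The abstract reports that existing OOD detectors break down under the covariate shifts in ImageNet-ES. As a point of reference, here is a minimal sketch of one widely used OOD baseline of the kind such evaluations test: maximum-softmax-probability (MSP) scoring, which thresholds a classifier's confidence. The function names, logits, and threshold below are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits, threshold=0.5):
    """Flag samples whose MSP falls below the threshold as OOD."""
    return msp_score(logits) < threshold

# A confident prediction (in-distribution-like) vs. a flat one (OOD-like).
confident = np.array([[9.0, 0.1, 0.1]])
uncertain = np.array([[0.3, 0.2, 0.1]])
print(flag_ood(confident))  # [False]
print(flag_ood(uncertain))  # [ True]
```

The paper's finding is precisely that confidence- and density-based scores of this kind can stay high on covariate-shifted inputs, which is why it argues the definition of OOD needs revisiting.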
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn the sensor-invariant features.
arXiv Detail & Related papers (2024-11-18T13:32:59Z)
- Adaptive Domain Learning for Cross-domain Image Denoising [57.4030317607274]
We present a novel adaptive domain learning scheme for cross-domain image denoising.
We use existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain).
The adaptive domain learning (ADL) training scheme automatically removes source-domain data that are harmful to fine-tuning a model for the target domain.
We also introduce a modulation module that incorporates sensor-specific information (sensor type and ISO) to help the model interpret input data for image denoising.
arXiv Detail & Related papers (2024-11-03T08:08:26Z)
- Can Your Generative Model Detect Out-of-Distribution Covariate Shift? [2.0144831048903566]
We propose a novel method for detecting Out-of-Distribution (OOD) sensory data using conditional Normalizing Flows (cNFs).
Our results on CIFAR10 vs. CIFAR10-C and ImageNet200 vs. ImageNet200-C demonstrate the effectiveness of the method.
arXiv Detail & Related papers (2024-09-04T19:27:56Z)
- Siamese Meets Diffusion Network: SMDNet for Enhanced Change Detection in High-Resolution RS Imagery [7.767708235606408]
We propose a new network, the Siamese Meets Diffusion Network (SMDNet).
This network combines the Siam-U2Net Feature Differential (SU-FDE) and the denoising diffusion implicit model to improve the accuracy of image edge change detection.
Our method's combination of feature extraction and diffusion models demonstrates effectiveness in change detection in remote sensing images.
arXiv Detail & Related papers (2024-01-17T16:48:55Z)
- Raw Bayer Pattern Image Synthesis with Conditional GAN [0.0]
We propose a method to generate Bayer pattern images with generative adversarial networks (GANs).
The Bayer pattern images can be generated by configuring the transformation as demosaicing.
Experiments show that the images generated by our proposed method outperform the original Pix2PixHD model in FID score, PSNR, and SSIM.
arXiv Detail & Related papers (2021-10-25T11:40:36Z)
- Self-supervised Multisensor Change Detection [14.191073951237772]
We revisit multisensor analysis in the context of self-supervised change detection in bi-temporal satellite images.
Recent developments in self-supervised learning have shown that some methods can work with only a few images.
Motivated by this, we propose a method for multi-sensor change detection using only the unlabeled target bi-temporal images.
arXiv Detail & Related papers (2021-02-12T12:31:10Z)
- Real-time detection of uncalibrated sensors using Neural Networks [62.997667081978825]
An online machine-learning-based uncalibration detector for temperature, humidity, and pressure sensors was developed.
The solution integrates an artificial neural network as its main component, which learns the behavior of the sensors under calibrated conditions.
The obtained results show that the proposed solution is able to detect uncalibrations for deviation values of 0.25 degrees, 1% RH and 1.5 Pa, respectively.
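The idea summarized above, learning a sensor's calibrated behavior and flagging deviations, can be sketched in a few lines. Here a least-squares linear model stands in for the paper's neural network purely to keep the example self-contained; the 0.25-degree tolerance mirrors the temperature deviation quoted in the summary, but the data and model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibrated phase: the sensor tracks a reference signal with small noise.
reference = np.linspace(20.0, 25.0, 200)           # reference temperature (degC)
calibrated = reference + rng.normal(0, 0.02, 200)  # healthy sensor readings

# Learn sensor = a * reference + b by least squares (stand-in for the ANN).
a, b = np.polyfit(reference, calibrated, 1)

def uncalibrated(readings, reference, tol=0.25):
    """Return True if the mean absolute residual exceeds the tolerance."""
    residual = np.abs(readings - (a * reference + b))
    return residual.mean() > tol

drifted = calibrated + 0.5  # inject a 0.5 degC offset drift
print(uncalibrated(calibrated, reference))  # False
print(uncalibrated(drifted, reference))     # True
```

An online detector would apply the same residual test over a sliding window of recent readings rather than a whole batch.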
arXiv Detail & Related papers (2021-02-02T15:44:39Z)
- Why Normalizing Flows Fail to Detect Out-of-Distribution Data [51.552870594221865]
Normalizing flows fail to distinguish between in- and out-of-distribution data.
We demonstrate that flows learn local pixel correlations and generic image-to-latent-space transformations.
We show that by modifying the architecture of flow coupling layers we can bias the flow towards learning the semantic structure of the target data.
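The failure mode this paper analyzes concerns likelihood-based OOD scoring: score each sample by its log-density under a model fit to in-distribution data and flag low-likelihood samples. In the sketch below, a diagonal Gaussian stands in for the normalizing flow solely to keep the example self-contained; the data, dimensions, and 5% threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, (1000, 4))  # in-distribution features

# Fit a diagonal Gaussian density (stand-in for a flow's learned density).
mu = train.mean(axis=0)
var = train.var(axis=0)

def log_likelihood(x):
    """Per-sample log-density under the fitted diagonal Gaussian."""
    return (-0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)).sum(axis=1)

# Threshold at the 5th percentile of training log-likelihoods.
threshold = np.quantile(log_likelihood(train), 0.05)

shifted = rng.normal(5.0, 1.0, (1, 4))  # covariate-shifted sample
print(log_likelihood(shifted) < threshold)  # [ True]: flagged as OOD
```

The paper's point is that a real flow, unlike this toy density, can assign *higher* likelihood to OOD images than to training data because it captures local pixel correlations rather than semantics, so this simple thresholding breaks down.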
arXiv Detail & Related papers (2020-06-15T17:00:01Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline in identifying whether a recalibration of the camera's intrinsic parameters is required.
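A common way to quantify intrinsic miscalibration of the kind this paper targets is reprojection error: the pixel distance between points projected with the assumed camera matrix and with the true one. The sketch below illustrates that idea with a pinhole model; it is not the paper's metric, and all values (focal lengths, drift, threshold) are invented.

```python
import numpy as np

def project(points_3d, K):
    """Pinhole projection of Nx3 camera-frame points to Nx2 pixels."""
    uvw = points_3d @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

def mean_reprojection_error(points_3d, K_true, K_assumed):
    """Mean pixel distance between projections under the two intrinsics."""
    diff = project(points_3d, K_true) - project(points_3d, K_assumed)
    return np.linalg.norm(diff, axis=1).mean()

K_true = np.array([[800.0,   0.0, 320.0],
                   [  0.0, 800.0, 240.0],
                   [  0.0,   0.0,   1.0]])
K_drift = K_true.copy()
K_drift[0, 0] += 10.0  # focal length drifted by 10 px

# Random points in front of the camera.
pts = np.random.default_rng(1).uniform([-1, -1, 2], [1, 1, 6], (100, 3))
err = mean_reprojection_error(pts, K_true, K_drift)
print(err > 0.5)  # True: error exceeds the (arbitrary) recalibration threshold
```

A learned detector replaces the unknown true intrinsics with a network that predicts, from images alone, whether the error has grown past such a threshold.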
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.