RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in
Object-centric Learning
- URL: http://arxiv.org/abs/2308.14899v1
- Date: Mon, 28 Aug 2023 20:52:18 GMT
- Title: RobustCLEVR: A Benchmark and Framework for Evaluating Robustness in
Object-centric Learning
- Authors: Nathan Drenkow, Mathias Unberath
- Abstract summary: We present the RobustCLEVR benchmark dataset and evaluation framework.
Our framework takes a novel approach to evaluating robustness by enabling the specification of causal dependencies.
Overall, we find that object-centric methods are not inherently robust to image corruptions.
- Score: 9.308581290987783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object-centric representation learning offers the potential to overcome
limitations of image-level representations by explicitly parsing image scenes
into their constituent components. While image-level representations typically
lack robustness to natural image corruptions, the robustness of object-centric
methods remains largely untested. To address this gap, we present the
RobustCLEVR benchmark dataset and evaluation framework. Our framework takes a
novel approach to evaluating robustness by enabling the specification of causal
dependencies in the image generation process grounded in expert knowledge and
capable of producing a wide range of image corruptions unattainable in existing
robustness evaluations. Using our framework, we define several causal models of
the image corruption process which explicitly encode assumptions about the
causal relationships and distributions of each corruption type. We generate
dataset variants for each causal model on which we evaluate state-of-the-art
object-centric methods. Overall, we find that object-centric methods are not
inherently robust to image corruptions. Our causal evaluation approach exposes
model sensitivities not observed using conventional evaluation processes,
yielding greater insight into robustness differences across algorithms. Lastly,
while conventional robustness evaluations view corruptions as
out-of-distribution, we use our causal framework to show that even training on
in-distribution image corruptions does not guarantee increased model
robustness. This work provides a step towards more concrete and substantiated
understanding of model performance and deterioration under complex corruption
processes of the real world.
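To make the causal-model idea from the abstract concrete, below is a minimal, hypothetical sketch of how a corruption process with explicit causal dependencies could be specified and sampled: each corruption parameter is drawn conditioned on its parents in an assumed causal graph, then the corruptions are applied to the rendered image in topological order. The variables (illumination, gain, noise, blur) and their dependency structure are illustrative assumptions for this example, not the causal models actually defined in the paper.

```python
# Illustrative sketch (not the authors' implementation): a tiny causal model
# of an image corruption process for a single grayscale frame.
import numpy as np

rng = np.random.default_rng(0)

def sample_corruption_params():
    # Root cause: scene illumination (hypothetical exogenous variable).
    illumination = rng.uniform(0.2, 1.0)
    # Sensor gain compensates for low illumination (child of illumination).
    gain = 1.0 / illumination
    # Noise level grows with gain; blur is sampled independently here.
    noise_sigma = max(0.01 * gain + rng.normal(0.0, 0.002), 0.0)
    blur_sigma = rng.gamma(shape=2.0, scale=0.5)
    return {"illumination": illumination,
            "noise_sigma": noise_sigma,
            "blur_sigma": blur_sigma}

def gaussian_blur(img, sigma):
    # Separable Gaussian blur (assumes a 2D float image in [0, 1]).
    if sigma <= 0:
        return img
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)

def corrupt(img, params):
    # Apply corruptions in causal/topological order: exposure -> blur -> noise.
    out = img * params["illumination"]
    out = gaussian_blur(out, params["blur_sigma"])
    out = out + rng.normal(0.0, params["noise_sigma"], size=out.shape)
    return np.clip(out, 0.0, 1.0)

clean = rng.uniform(size=(64, 64))   # stand-in for a rendered CLEVR frame
corrupted = corrupt(clean, sample_corruption_params())
```

Intervening on a single node of such a graph (e.g., fixing the illumination variable) while resampling the rest is one way a causal specification can produce corruption distributions that a flat, independent-corruption benchmark cannot.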
Related papers
- Leveraging generative models to characterize the failure conditions of image classifiers [5.018156030818883]
We exploit the ability of generative adversarial networks (StyleGAN2) to produce controllable distributions of high-quality image data.
The failure conditions are expressed as directions of strong performance degradation in the generative model latent space.
arXiv Detail & Related papers (2024-10-01T08:52:46Z)
- PAIF: Perception-Aware Infrared-Visible Image Fusion for Attack-Tolerant Semantic Segmentation [50.556961575275345]
We propose a perception-aware fusion framework to promote segmentation robustness in adversarial scenes.
We show that our scheme substantially enhances robustness, with gains of 15.3% mIoU over advanced competitors.
arXiv Detail & Related papers (2023-08-08T01:55:44Z)
- Frequency-Based Vulnerability Analysis of Deep Learning Models against Image Corruptions [48.34142457385199]
We present MUFIA, an algorithm designed to identify the specific types of corruptions that can cause models to fail.
We find that even state-of-the-art models trained to be robust against known common corruptions struggle against the low visibility-based corruptions crafted by MUFIA.
arXiv Detail & Related papers (2023-06-12T15:19:13Z)
- A Survey on the Robustness of Computer Vision Models against Common Corruptions [3.6486148851646063]
Computer vision models are susceptible to changes in input images caused by sensor errors or extreme imaging environments.
These corruptions can significantly hinder the reliability of these models when deployed in real-world scenarios.
We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
arXiv Detail & Related papers (2023-05-10T10:19:31Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench for the image classification task.
By designing the training settings accordingly, we achieve the new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Robustness and invariance properties of image classifiers [8.970032486260695]
Deep neural networks have achieved impressive results in many image classification tasks.
Deep networks are not robust to a large variety of semantic-preserving image modifications.
The poor robustness of image classifiers to small data distribution shifts raises serious concerns regarding their trustworthiness.
arXiv Detail & Related papers (2022-08-30T11:00:59Z)
- Exploring Resolution and Degradation Clues as Self-supervised Signal for Low Quality Object Detection [77.3530907443279]
We propose a novel self-supervised framework to detect objects in degraded low resolution images.
Our method achieves superior performance compared with existing methods under various degradation conditions.
arXiv Detail & Related papers (2022-08-05T09:36:13Z)
- RestoreDet: Degradation Equivariant Representation for Object Detection in Low Resolution Images [81.91416537019835]
We propose a novel framework, RestoreDet, to detect objects in degraded low resolution images.
Our CenterNet-based framework achieves superior performance compared with existing methods under various degradation conditions.
arXiv Detail & Related papers (2022-01-07T03:40:23Z)
- Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z)
- Improving robustness against common corruptions with frequency biased models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
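As a brief aside on the last entry: a total-variation penalty on convolutional feature maps is straightforward to express as a loss term. The sketch below is an illustrative assumption of how such a regularizer could look in PyTorch, not the cited paper's implementation; the weight tv_weight and the choice of feature layer are hypothetical.

```python
# Illustrative sketch (not the cited paper's code): total-variation penalty on a
# convolutional feature map, encouraging spatially smooth (low high-frequency) activations.
import torch

def feature_map_tv(feats: torch.Tensor) -> torch.Tensor:
    # feats: (batch, channels, height, width) activations from some conv layer.
    dh = (feats[:, :, 1:, :] - feats[:, :, :-1, :]).abs().mean()  # vertical differences
    dw = (feats[:, :, :, 1:] - feats[:, :, :, :-1]).abs().mean()  # horizontal differences
    return dh + dw

# Hypothetical usage inside a training step:
# loss = task_loss + tv_weight * feature_map_tv(intermediate_feats)
```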