Nuisance-Label Supervision: Robustness Improvement by Free Labels
- URL: http://arxiv.org/abs/2110.07118v1
- Date: Thu, 14 Oct 2021 02:07:00 GMT
- Title: Nuisance-Label Supervision: Robustness Improvement by Free Labels
- Authors: Xinyue Wei, Weichao Qiu, Yi Zhang, Zihao Xiao, Alan Yuille
- Abstract summary: We present a Nuisance-label Supervision (NLS) module, which can make models more robust to nuisance factor variations.
Experiments show consistent improvements in robustness to image corruption and appearance changes in action recognition.
- Score: 14.711384503643995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a Nuisance-label Supervision (NLS) module, which
can make models more robust to nuisance factor variations. Nuisance factors are
those irrelevant to a task, and an ideal model should be invariant to them. For
example, an activity recognition model should perform consistently regardless
of changes in clothing and background. However, our experiments show that
existing models are far from this capability. We therefore explicitly supervise
a model with nuisance labels to make the extracted features less dependent on
nuisance factors.
Although the values of nuisance factors are rarely annotated, we demonstrate
that besides existing annotations, nuisance labels can be acquired freely from
data augmentation and synthetic data. Experiments show consistent improvements
in robustness to image corruption and to appearance changes in action
recognition.
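The sketch below illustrates one plausible reading of the abstract: a shared backbone with a task head and an auxiliary nuisance-label head, where the index of a randomly applied augmentation serves as the "free" nuisance label. This is a hypothetical PyTorch rendering, not the authors' released implementation; the names (`NLSModel`, `augment_with_nuisance_label`, `nuisance_weight`), the choice of augmentations, and the 0.1 loss weight are illustrative assumptions.

```python
# Minimal sketch of nuisance-label supervision (NLS), assuming an auxiliary
# nuisance head on a shared backbone and augmentation indices as free labels.
# Not the authors' code; names and the loss weight are illustrative.
import random

import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T

# Each augmentation's index doubles as a free nuisance label.
AUGMENTATIONS = [
    T.ColorJitter(brightness=0.5),
    T.GaussianBlur(kernel_size=5),
    T.RandomGrayscale(p=1.0),
]


def augment_with_nuisance_label(img):
    """Apply a random augmentation and return (augmented image, nuisance label)."""
    idx = random.randrange(len(AUGMENTATIONS))
    return AUGMENTATIONS[idx](img), idx


class NLSModel(nn.Module):
    """Shared backbone with a task head and an auxiliary nuisance-label head."""

    def __init__(self, backbone, feat_dim, num_classes, num_nuisance):
        super().__init__()
        self.backbone = backbone
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.nuisance_head = nn.Linear(feat_dim, num_nuisance)

    def forward(self, x):
        feats = self.backbone(x)
        return self.task_head(feats), self.nuisance_head(feats)


def nls_loss(task_logits, nuis_logits, task_labels, nuis_labels, nuisance_weight=0.1):
    """Task cross-entropy plus a weighted nuisance-label term (weight is an assumption)."""
    return (F.cross_entropy(task_logits, task_labels)
            + nuisance_weight * F.cross_entropy(nuis_logits, nuis_labels))
```

In this reading, the nuisance head absorbs augmentation-specific information so the task head need not encode it; whether NLS uses a plain auxiliary head, a gradient-reversal coupling, or another mechanism is a detail the abstract leaves open, so treat the head design and weighting here as placeholders.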
Related papers
- Regulating Model Reliance on Non-Robust Features by Smoothing Input Marginal Density [93.32594873253534]
Trustworthy machine learning requires meticulous regulation of model reliance on non-robust features.
We propose a framework to delineate and regulate such features by attributing model predictions to the input.
arXiv Detail & Related papers (2024-07-05T09:16:56Z)
- Perceptual Quality-based Model Training under Annotator Label Uncertainty [15.015925663078377]
Annotators exhibit disagreement during data labeling, which can be termed annotator label uncertainty.
We introduce a novel perceptual quality-based model training framework to objectively generate multiple labels for model training.
arXiv Detail & Related papers (2024-03-15T10:52:18Z)
- Vision-language Assisted Attribute Learning [53.60196963381315]
Attribute labeling at large scale is typically incomplete and partial.
Existing attribute learning methods often treat the missing labels as negative or simply ignore them all during training.
We leverage the available vision-language knowledge to explicitly disclose the missing labels for enhancing model learning.
arXiv Detail & Related papers (2023-12-12T06:45:19Z)
- Partial Label Supervision for Agnostic Generative Noisy Label Learning [18.29334728940232]
Noisy label learning has been tackled with both discriminative and generative approaches.
We propose a novel framework for generative noisy label learning that addresses the limitations of prior approaches.
arXiv Detail & Related papers (2023-08-02T14:48:25Z)
- Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels [61.97359362447732]
Learning from noisy labels is an important and long-standing problem in machine learning for real applications.
In this paper, we reformulate the label-noise problem from a generative-model perspective.
Our model achieves new state-of-the-art (SOTA) results on all the standard real-world benchmark datasets.
arXiv Detail & Related papers (2023-05-31T03:01:36Z)
- Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation [32.66196135141696]
Features with varying relationships to the label are nuisances.
Models that exploit nuisance-label relationships face performance degradation when these relationships change.
We develop an approach that uses knowledge of the semantic features by corrupting them in the data.
arXiv Detail & Related papers (2022-10-04T01:40:31Z)
- Causal Transportability for Visual Recognition [70.13627281087325]
We show that standard classifiers fail because the association between images and labels is not transportable across settings.
We then show that the causal effect, which severs all sources of confounding, remains invariant across domains.
This motivates us to develop an algorithm to estimate the causal effect for image classification.
arXiv Detail & Related papers (2022-04-26T15:02:11Z)
- Generative Modeling Helps Weak Supervision (and Vice Versa) [87.62271390571837]
We propose a model fusing weak supervision and generative adversarial networks.
It captures discrete variables in the data alongside the label estimate derived from weak supervision.
It is the first approach to enable data augmentation through weakly supervised synthetic images and pseudolabels.
arXiv Detail & Related papers (2022-03-22T20:24:21Z)
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
- Towards Robust Classification Model by Counterfactual and Invariant Data Generation [7.488317734152585]
Spuriousness occurs when some features correlate with labels but are not causal.
We propose two data generation processes to reduce spuriousness.
Our data generation processes outperform state-of-the-art methods in accuracy when spurious correlations break.
arXiv Detail & Related papers (2021-06-02T12:48:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.