Don't Judge an Object by Its Context: Learning to Overcome Contextual
Bias
- URL: http://arxiv.org/abs/2001.03152v2
- Date: Tue, 5 May 2020 23:20:53 GMT
- Title: Don't Judge an Object by Its Context: Learning to Overcome Contextual
Bias
- Authors: Krishna Kumar Singh, Dhruv Mahajan, Kristen Grauman, Yong Jae Lee,
Matt Feiszli, Deepti Ghadiyaram
- Abstract summary: Existing models often leverage co-occurrences between objects and their context to improve recognition accuracy.
This work focuses on addressing such contextual biases to improve the robustness of the learnt feature representations.
- Score: 113.44471186752018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing models often leverage co-occurrences between objects and their
context to improve recognition accuracy. However, strongly relying on context
risks a model's generalizability, especially when typical co-occurrence
patterns are absent. This work focuses on addressing such contextual biases to
improve the robustness of the learnt feature representations. Our goal is to
accurately recognize a category in the absence of its context, without
compromising on performance when it co-occurs with context. Our key idea is to
decorrelate feature representations of a category from its co-occurring
context. We achieve this by learning a feature subspace that explicitly
represents categories occurring in the absence of context alongside a joint
feature subspace that represents both categories and context. Our very simple
yet effective method is extensible to two multi-label tasks -- object and
attribute classification. On 4 challenging datasets, we demonstrate the
effectiveness of our method in reducing contextual bias.
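The abstract's key idea, decorrelating a category's features from its co-occurring context by learning a category-only subspace alongside a joint category-plus-context subspace, can be illustrated with a minimal sketch. All names, dimensions, and the random projections below are hypothetical stand-ins, not the paper's learned weights or exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, sub_dim, n_classes = 16, 8, 4

# Hypothetical projections: W_cat spans a "category-only" subspace meant to
# represent categories seen without their usual context; W_joint spans a
# joint subspace representing categories together with co-occurring context.
W_cat = rng.normal(size=(feat_dim, sub_dim))
W_joint = rng.normal(size=(feat_dim, sub_dim))

# One classifier head per subspace (one weight column per category).
head_cat = rng.normal(size=(sub_dim, n_classes))
head_joint = rng.normal(size=(sub_dim, n_classes))

def category_scores(feature, in_context=True):
    """Score each category from the two feature subspaces.

    With context present, both subspaces contribute; for a category
    occurring out of context, only the category-only subspace is used.
    """
    z_cat = feature @ W_cat
    z_joint = feature @ W_joint
    if in_context:
        return z_cat @ head_cat + z_joint @ head_joint
    return z_cat @ head_cat

feature = rng.normal(size=feat_dim)
scores_with_ctx = category_scores(feature, in_context=True)
scores_no_ctx = category_scores(feature, in_context=False)
```

The point of the split is that the category-only head never sees the joint subspace, so recognition out of context does not depend on context features.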
Related papers
- AttrSeg: Open-Vocabulary Semantic Segmentation via Attribute
Decomposition-Aggregation [33.25304533086283]
Open-vocabulary semantic segmentation is a challenging task that requires segmenting novel object categories at inference time.
Recent studies have explored vision-language pre-training to handle this task, but suffer from unrealistic assumptions in practical scenarios.
This work proposes a novel attribute decomposition-aggregation framework, AttrSeg, inspired by human cognition in understanding new concepts.
arXiv Detail & Related papers (2023-08-31T19:34:09Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With only negligible additional parameters and +2% inference time, it achieves a decent performance gain on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- Self-Supervised Visual Representation Learning with Semantic Grouping [50.14703605659837]
We tackle the problem of learning visual representations from unlabeled scene-centric data.
We propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
arXiv Detail & Related papers (2022-05-30T17:50:59Z)
- Context vs Target Word: Quantifying Biases in Lexical Semantic Datasets [18.754562380068815]
State-of-the-art contextualized models such as BERT use tasks such as WiC and WSD to evaluate their word-in-context representations.
This study presents the first quantitative analysis (using probing baselines) on the context-word interaction being tested in major contextual lexical semantic tasks.
arXiv Detail & Related papers (2021-12-13T15:37:05Z)
- Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition [48.5398871460388]
We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of Variational Auto-Encoder (VAE), where contextual features are learned as a posterior conditioned distribution on the object.
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
arXiv Detail & Related papers (2021-10-08T11:31:58Z)
- Wisdom of the Contexts: Active Ensemble Learning for Contextual Anomaly Detection [7.87320844079302]
In contextual anomaly detection (CAD), an object is only considered anomalous within a specific context.
We propose a novel approach, called WisCon, that automatically creates contexts from the feature set.
Our method constructs an ensemble of multiple contexts, with varying importance scores, based on the assumption that not all useful contexts are equally so.
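The ensemble-of-contexts idea can be sketched in a few lines: each context is a subset of features, each context gets a base anomaly score, and the scores are combined with importance weights. The feature subsets, importance values, and distance-based base detector below are made up for illustration; WisCon creates and weights contexts automatically:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 4))  # toy training data with 4 features

# Hypothetical contexts: each is a subset of feature indices under which
# an object is judged, with an assumed importance score per context.
contexts = [[0, 1], [2, 3], [0, 3]]
importance = np.array([0.5, 0.3, 0.2])

def anomaly_score(x):
    """Importance-weighted ensemble of per-context scores, using a
    standardized distance-to-mean as a stand-in base detector."""
    per_context = []
    for idx in contexts:
        mu = X[:, idx].mean(axis=0)
        sd = X[:, idx].std(axis=0)
        per_context.append(np.linalg.norm((x[idx] - mu) / sd))
    return float(importance @ np.array(per_context))
```

An object that is normal under every high-importance context gets a low ensemble score; one that is anomalous within even one heavily weighted context stands out.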
arXiv Detail & Related papers (2021-01-27T17:34:13Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Fast and Robust Unsupervised Contextual Biasing for Speech Recognition [16.557586847398778]
We propose an alternative approach that does not entail an explicit contextual language model.
We derive the bias score for every word in the system vocabulary from the training corpus.
We show significant improvement in recognition accuracy when the relevant context is available.
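A toy sketch of deriving a per-word bias score from a training corpus. The corpus, the unigram log-ratio scoring, and the uniform prior are all assumptions made for illustration; the paper's actual derivation differs:

```python
import math
from collections import Counter

# Toy training transcripts; a real system would use the full ASR training
# corpus and a much larger vocabulary.
corpus = [
    "play jazz music",
    "play rock music",
    "call john smith",
    "call jane doe",
]

counts = Counter(w for line in corpus for w in line.split())
total = sum(counts.values())
vocab_size = len(counts)

def bias_score(word):
    """Hypothetical corpus-derived bias: log-ratio of a word's empirical
    unigram probability to a uniform prior over the vocabulary."""
    p = counts.get(word, 0) / total
    return math.log((p + 1e-9) / (1.0 / vocab_size))
```

Words more frequent than the uniform prior get a positive score, rare ones a negative score, giving every vocabulary word a bias value without any explicit contextual language model.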
arXiv Detail & Related papers (2020-05-04T17:29:59Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parsing and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.