On Guiding Visual Attention with Language Specification
- URL: http://arxiv.org/abs/2202.08926v1
- Date: Thu, 17 Feb 2022 22:40:19 GMT
- Title: On Guiding Visual Attention with Language Specification
- Authors: Suzanne Petryk, Lisa Dunlap, Keyan Nasseri, Joseph Gonzalez, Trevor
Darrell, and Anna Rohrbach
- Abstract summary: We use high-level language specification as advice for constraining the classification evidence to task-relevant features, instead of distractors.
We show that supervising spatial attention in this way improves performance on classification tasks with biased and noisy data.
- Score: 76.08326100891571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While real world challenges typically define visual categories with language
words or phrases, most visual classification methods define categories with
numerical indices. However, the language specification of the classes provides
an especially useful prior for biased and noisy datasets, where it can help
disambiguate what features are task-relevant. Recently, large-scale multimodal
models have been shown to recognize a wide variety of high-level concepts from
a language specification even without additional image training data, but they
are often unable to distinguish classes for more fine-grained tasks. CNNs, in
contrast, can extract subtle image features that are required for fine-grained
discrimination, but will overfit to any bias or noise in datasets. Our insight
is to use high-level language specification as advice for constraining the
classification evidence to task-relevant features, instead of distractors. To
do this, we ground task-relevant words or phrases with attention maps from a
pretrained large-scale model. We then use this grounding to supervise a
classifier's spatial attention away from distracting context. We show that
supervising spatial attention in this way improves performance on
classification tasks with biased and noisy data, including about 3-15%
worst-group accuracy improvements and 41-45% relative improvements on fairness
metrics.
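The core mechanism described in the abstract lends itself to a short sketch: ground the task-relevant phrase to a spatial relevance map with a pretrained vision-language model, then penalize whatever classifier attention falls outside that map. Below is a minimal, hypothetical PyTorch sketch; the map shapes, the `ground_phrase` helper, and the loss weight are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_supervision_loss(classifier_attn, language_attn, eps=1e-6):
    """Penalize classifier attention that falls outside the language-grounded region.

    classifier_attn: (B, H, W) non-negative spatial attention of the classifier
                     (e.g., a Grad-CAM map) -- a hypothetical interface for this sketch.
    language_attn:   (B, H, W) relevance map for the task-relevant phrase, produced by a
                     pretrained vision-language model and resized to match (H, W).
    """
    # Scale the language map to [0, 1] so (1 - lang) acts as an "irrelevance" weight.
    lang = language_attn / (language_attn.flatten(1).amax(1).view(-1, 1, 1) + eps)
    # Normalize classifier attention to sum to 1 so the penalty measures misplaced attention mass.
    cls = classifier_attn / (classifier_attn.flatten(1).sum(1).view(-1, 1, 1) + eps)
    # Penalty grows when attention mass lands on regions the language grounding marks as
    # irrelevant, i.e., likely distractor context.
    return (cls * (1.0 - lang)).sum(dim=(1, 2)).mean()

# Hypothetical usage: cross-entropy plus a weighted attention-supervision term.
# logits, cls_attn = model(images)             # classifier prediction and its spatial attention
# lang_attn = ground_phrase(images, phrases)   # assumed helper wrapping a pretrained multimodal model
# loss = F.cross_entropy(logits, labels) + 0.1 * attention_supervision_loss(cls_attn, lang_attn)
```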
Related papers
- Enhancing Visual Classification using Comparative Descriptors [13.094102298155736]
We introduce a novel concept of comparative descriptors.
These descriptors emphasize the unique features of a target class against its most similar classes, enhancing differentiation.
An additional filtering process ensures that these descriptors are closer to the image embeddings in the CLIP space.
arXiv Detail & Related papers (2024-11-08T06:28:02Z)
- Evolving Interpretable Visual Classifiers with Large Language Models [34.4903887876357]
Multimodal pre-trained models, such as CLIP, are popular for zero-shot classification due to their open-vocabulary flexibility and high performance.
However, vision-language models, which compute similarity scores between images and class labels, are largely black-box: they offer limited interpretability, risk encoding bias, and cannot discover new visual concepts that are not written down.
We present a novel method that discovers interpretable yet discriminative sets of attributes for visual recognition.
arXiv Detail & Related papers (2024-04-15T17:09:53Z)
- Language Models for Text Classification: Is In-Context Learning Enough? [54.869097980761595]
Recent foundational language models have shown state-of-the-art performance in many NLP tasks in zero- and few-shot settings.
An advantage of these models over more standard approaches is their ability to understand instructions written in natural language (prompts).
This makes them suitable for addressing text classification problems for domains with limited amounts of annotated instances.
arXiv Detail & Related papers (2024-03-26T12:47:39Z)
- Incremental Image Labeling via Iterative Refinement [4.7590051176368915]
In particular, the semantic gap problem leads to a many-to-many mapping between the information extracted from an image and its linguistic description.
This unavoidable bias further degrades performance on current computer vision tasks.
We introduce a Knowledge Representation (KR)-based methodology to provide guidelines driving the labeling process.
arXiv Detail & Related papers (2023-04-18T13:37:22Z)
- SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification [84.05253637260743]
We propose a new framework, named Semantic-guided Visual Adapting (SgVA), to extend vision-language pre-trained models.
SgVA produces discriminative task-specific visual features by comprehensively using a vision-specific contrastive loss, a cross-modal contrastive loss, and an implicit knowledge distillation.
State-of-the-art results on 13 datasets demonstrate that the adapted visual features complement the cross-modal features well, improving few-shot image classification.
arXiv Detail & Related papers (2022-11-28T14:58:15Z)
- CCPrefix: Counterfactual Contrastive Prefix-Tuning for Many-Class Classification [57.62886091828512]
We propose a brand-new prefix-tuning method, Counterfactual Contrastive Prefix-tuning (CCPrefix) for many-class classification.
Basically, an instance-dependent soft prefix, derived from fact-counterfactual pairs in the label space, is leveraged to complement the language verbalizers in many-class classification.
arXiv Detail & Related papers (2022-11-11T03:45:59Z)
- Text2Model: Text-based Model Induction for Zero-shot Image Classification [38.704831945753284]
We address the challenge of building task-agnostic classifiers using only text descriptions.
We generate zero-shot classifiers using a hypernetwork that receives class descriptions and outputs a multi-class model.
We evaluate this approach in a series of zero-shot classification tasks, for image, point-cloud, and action recognition, using a range of text descriptions.
arXiv Detail & Related papers (2022-10-27T05:19:55Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Leveraging Adversarial Training in Self-Learning for Cross-Lingual Text Classification [52.69730591919885]
We present a semi-supervised adversarial training process that minimizes the maximal loss for label-preserving input perturbations.
We observe significant gains in effectiveness on document and intent classification for a diverse set of languages.
arXiv Detail & Related papers (2020-07-29T19:38:35Z)
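To make the min-max objective in the adversarial training entry above concrete, here is a hedged sketch of semi-supervised adversarial training on input embeddings: an inner step perturbs the embeddings toward higher loss (approximating the worst-case label-preserving perturbation), and the outer step minimizes the combined loss. The `model(embeds)` interface, `epsilon`, and `alpha` are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def adversarial_loss(model, embeds, labels, epsilon=1.0, alpha=0.5):
    """Hedged sketch of min-max adversarial training on input embeddings.

    `model` is assumed to accept token embeddings of shape (B, T, D) and return logits;
    this interface, `epsilon`, and `alpha` are illustrative, not the original paper's setup.
    """
    embeds = embeds.detach().requires_grad_(True)
    clean_loss = F.cross_entropy(model(embeds), labels)
    # Inner maximization: one gradient step toward higher loss, norm-bounded by epsilon,
    # approximating the worst-case label-preserving perturbation of the input.
    (grad,) = torch.autograd.grad(clean_loss, embeds, retain_graph=True)
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    adv_loss = F.cross_entropy(model(embeds + delta.detach()), labels)
    # Outer minimization: train on the clean loss plus the weighted adversarial loss.
    return clean_loss + alpha * adv_loss
```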
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.