Evaluating language-biased image classification based on semantic
representations
- URL: http://arxiv.org/abs/2201.11014v1
- Date: Wed, 26 Jan 2022 15:46:36 GMT
- Title: Evaluating language-biased image classification based on semantic
representations
- Authors: Yoann Lemesle, Masataka Sawayama, Guillermo Valle-Perez, Maxime Adolphe, Hélène Sauzéon, Pierre-Yves Oudeyer
- Abstract summary: Humans show language-biased image recognition for a word-embedded image, known as picture-word interference.
Similar to humans, recent artificial models jointly trained on texts and images, e.g., OpenAI CLIP, show language-biased image classification.
- Score: 13.508894957080777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans show language-biased image recognition for a word-embedded image,
known as picture-word interference. Such interference depends on hierarchical
semantic categories and reflects that human language processing highly
interacts with visual processing. Similar to humans, recent artificial models
jointly trained on texts and images, e.g., OpenAI CLIP, show language-biased
image classification. Exploring whether this bias leads to interference similar to that observed in humans can help clarify the extent to which the model acquires hierarchical semantic representations from jointly learning language
and vision. The present study introduces methodological tools from the
cognitive science literature to assess the biases of artificial models.
Specifically, we introduce a benchmark task to test whether words superimposed
on images can distort image classification across different category levels
and, if so, whether the perturbation is due to a shared semantic
representation between language and vision. Our dataset is a set of
word-embedded images and consists of a mixture of natural image datasets and
hierarchical word labels with superordinate/basic category levels. Using this
benchmark test, we evaluate the CLIP model. We show that superimposed words
distort the model's image classification across different category
levels, but the effect does not depend on the semantic relationship between
images and embedded words. This suggests that the semantic word representation
in CLIP's visual processing is not shared with the image representation,
although the word representation strongly dominates for word-embedded images.
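To make the benchmark procedure concrete, below is a minimal sketch of how one might superimpose a word on a natural image and probe CLIP's zero-shot classification at superordinate and basic category levels. It assumes the openai/CLIP package, PyTorch, and Pillow; the image path, label sets, and drawing details are illustrative assumptions, not the authors' released dataset or code.

```python
# Minimal sketch (not the authors' code): test whether an embedded word shifts
# CLIP's zero-shot prediction at two category levels.
import torch
import clip
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Illustrative label sets at two hierarchical levels (superordinate / basic).
superordinate_labels = ["animal", "vehicle", "food", "furniture"]
basic_labels = ["dog", "cat", "car", "airplane", "pizza", "chair"]

def embed_word(image, word):
    """Return a copy of the image with the word drawn near its center."""
    img = image.copy()
    draw = ImageDraw.Draw(img)
    w, h = img.size
    draw.text((w // 2 - 4 * len(word), h // 2), word, fill="white")
    return img

@torch.no_grad()
def classify(image, labels):
    """Zero-shot classification: return the label with the highest CLIP similarity."""
    image_input = preprocess(image).unsqueeze(0).to(device)
    text_input = clip.tokenize([f"a photo of a {label}" for label in labels]).to(device)
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(text_input)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    sims = (image_features @ text_features.T).squeeze(0)
    return labels[sims.argmax().item()]

# Illustrative usage: compare predictions with and without an incongruent word.
image = Image.open("dog.jpg")  # hypothetical path to a natural image of a dog
for labels in (superordinate_labels, basic_labels):
    plain = classify(image, labels)
    distorted = classify(embed_word(image, "car"), labels)
    print(f"no word: {plain} | word 'car': {distorted} | labels: {labels}")
```

Comparing the two predictions per label set (and repeating with congruent versus incongruent words) is one way to measure whether the distortion depends on the semantic relationship between the image and the embedded word.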
Related papers
- Learning Object Semantic Similarity with Self-Supervision [7.473473243713322]
Humans judge the similarity of two objects based on their semantic relatedness.
It remains unclear how humans learn about semantic relationships between objects and categories.
arXiv Detail & Related papers (2024-04-19T14:08:17Z)
- Vocabulary-free Image Classification and Semantic Segmentation [71.78089106671581]
We introduce the Vocabulary-free Image Classification (VIC) task, which aims to assign a class from an unconstrained language-induced semantic space to an input image without needing a known vocabulary.
VIC is challenging due to the vastness of the semantic space, which contains millions of concepts, including fine-grained categories.
We propose Category Search from External Databases (CaSED), a training-free method that leverages a pre-trained vision-language model and an external database.
arXiv Detail & Related papers (2024-04-16T19:27:21Z)
- Towards Image Semantics and Syntax Sequence Learning [8.033697392628424]
We introduce the concept of "image grammar", consisting of "image semantics" and "image syntax".
We propose a weakly supervised two-stage approach to learn the image grammar relative to a class of visual objects/scenes.
Our framework is trained to reason over patch semantics and detect faulty syntax.
arXiv Detail & Related papers (2024-01-31T00:16:02Z)
- Vocabulary-free Image Classification [75.38039557783414]
We formalize a novel task, termed Vocabulary-free Image Classification (VIC).
VIC aims to assign to an input image a class that resides in an unconstrained language-induced semantic space, without the prerequisite of a known vocabulary.
CaSED is a method that exploits a pre-trained vision-language model and an external vision-language database to address VIC in a training-free manner.
arXiv Detail & Related papers (2023-06-01T17:19:43Z)
- Perceptual Grouping in Contrastive Vision-Language Models [59.1542019031645]
We show how vision-language models are able to understand where objects reside within an image and group together visually related parts of the imagery.
We propose a minimal set of modifications that results in models that uniquely learn both semantic and spatial information.
arXiv Detail & Related papers (2022-10-18T17:01:35Z)
- Zero-Shot Audio Classification using Image Embeddings [16.115449653258356]
We introduce image embeddings as side information for zero-shot audio classification by using a nonlinear acoustic-semantic projection.
We demonstrate that the image embeddings can be used as semantic information to perform zero-shot audio classification.
arXiv Detail & Related papers (2022-06-10T10:36:56Z)
- HIRL: A General Framework for Hierarchical Image Representation Learning [54.12773508883117]
We propose a general framework for Hierarchical Image Representation Learning (HIRL).
This framework aims to learn multiple semantic representations for each image, and these representations are structured to encode image semantics from fine-grained to coarse-grained.
Based on a probabilistic factorization, HIRL learns the most fine-grained semantics by an off-the-shelf image SSL approach and learns multiple coarse-grained semantics by a novel semantic path discrimination scheme.
arXiv Detail & Related papers (2022-05-26T05:13:26Z)
- Building a visual semantics aware object hierarchy [0.0]
We propose a novel unsupervised method to build a visual-semantics-aware object hierarchy.
Our intuition in this paper comes from real-world knowledge representation where concepts are hierarchically organized.
The evaluation consists of two parts: first, we apply the constructed hierarchy to the object recognition task; then we compare our visual hierarchy with existing lexical hierarchies to show the validity of our method.
arXiv Detail & Related papers (2022-02-26T00:10:21Z)
- Consensus Graph Representation Learning for Better Grounded Image Captioning [48.208119537050166]
We propose the Consensus Graph Representation Learning framework (CGRL) for grounded image captioning.
We validate the effectiveness of our model, with a significant decline in object hallucination (-9% CHAIRi) on the Flickr30k Entities dataset.
arXiv Detail & Related papers (2021-12-02T04:17:01Z)
- Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that the availability of such external semantic information, in conjunction with the visual semantics from images, boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.