The in-context inductive biases of vision-language models differ across modalities
- URL: http://arxiv.org/abs/2502.01530v1
- Date: Mon, 03 Feb 2025 17:11:03 GMT
- Title: The in-context inductive biases of vision-language models differ across modalities
- Authors: Kelsey Allen, Ishita Dasgupta, Eliza Kosoy, Andrew K. Lampinen
- Abstract summary: We study how vision-language models' generalizations vary by the modality in which stimuli are presented and by the way the stimuli are described in text.
We find that the models generally show some bias towards generalizing according to shape over color.
These results help to reveal how vision-language models represent different types of inputs in context.
- Score: 15.501577963067856
- Abstract: Inductive biases are what allow learners to make guesses in the absence of conclusive evidence. These biases have often been studied in cognitive science using concepts or categories -- e.g., by testing how humans generalize a new category from a few examples that leave the category boundary ambiguous. We use these approaches to study generalization in foundation models during in-context learning. Modern foundation models can condition on both vision and text, and differences in how they interpret and learn from these different modalities are an emerging area of study. Here, we study how their generalizations vary by the modality in which stimuli are presented, and by the way the stimuli are described in text. We study these biases with three different experimental paradigms, across three different vision-language models. We find that the models generally show some bias towards generalizing according to shape over color. This shape bias tends to be amplified when the examples are presented visually. By contrast, when examples are presented in text, the ordering of adjectives affects generalization. However, the extent of these effects varies across models and paradigms. These results help to reveal how vision-language models represent different types of inputs in context, and may have practical implications for the use of vision-language models.
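To make the paradigm concrete, below is a minimal sketch, in Python, of a text-only version of the ambiguous-category probe the abstract describes. It is not the authors' released code: the category names ("fep"/"zup"), the feature pairs, and the prompt template are illustrative assumptions.

```python
# Minimal sketch of a text-only inductive-bias probe (illustrative, not the
# paper's released code). Each in-context example pairs one shape with one
# color, so either feature could define the category; the probe item then
# matches one category in shape and the other in color.

def make_prompt(adjective_order: str = "shape-first") -> str:
    """Build an in-context prompt that leaves the category boundary
    ambiguous between shape and color."""
    examples = [
        (("triangular", "red"), "fep"),   # hypothetical category names
        (("square", "blue"), "zup"),
    ]
    # Matches the "fep" example in shape but the "zup" example in color.
    probe = ("triangular", "blue")

    def describe(shape: str, color: str) -> str:
        # The abstract reports that adjective ordering in text affects
        # generalization, so the ordering is exposed as a manipulation here.
        if adjective_order == "shape-first":
            return f"a {shape} {color} object"
        return f"a {color} {shape} object"

    lines = [f"This is {describe(*feats)}. It is a {label}."
             for feats, label in examples]
    lines.append(f"This is {describe(*probe)}. Is it a fep or a zup?")
    return "\n".join(lines)


if __name__ == "__main__":
    for order in ("shape-first", "color-first"):
        print(f"--- {order} ---")
        print(make_prompt(order))
```

A shape-biased model should answer "fep" (matching the triangular example), while a color-biased model should answer "zup"; running such prompts through several vision-language models, alongside a visual rendering of the same stimuli, mirrors the text-versus-image comparison described above.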
Related papers
- Gender Bias in Instruction-Guided Speech Synthesis Models [55.2480439325792]
This study investigates the potential gender bias in how models interpret occupation-related prompts.
We explore whether these models exhibit tendencies to amplify gender stereotypes when interpreting such prompts.
Our experimental results reveal the models' tendency to exhibit gender bias for certain occupations.
arXiv Detail & Related papers (2025-02-08T17:38:24Z) - Biased or Flawed? Mitigating Stereotypes in Generative Language Models by Addressing Task-Specific Flaws [12.559028963968247]
Generative language models often reflect and amplify societal biases in their outputs.
We propose a targeted stereotype mitigation framework that implicitly mitigates observed stereotypes in generative models.
We reduce stereotypical outputs by over 60% across multiple dimensions.
arXiv Detail & Related papers (2024-12-16T03:29:08Z) - Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z) - Exposing Bias in Online Communities through Large-Scale Language Models [3.04585143845864]
This work leverages language models' tendency to absorb bias from training data in order to explore the biases of six different online communities.
The bias of the resulting models is evaluated by prompting them with different demographics and comparing the sentiment and toxicity of the resulting generations (a minimal sketch of this probing loop appears after this list).
This work not only affirms how easily bias is absorbed from training data but also presents a scalable method to identify and compare the bias of different datasets or communities.
arXiv Detail & Related papers (2023-06-04T08:09:26Z) - How Do In-Context Examples Affect Compositional Generalization? [86.57079616209474]
In this paper, we present CoFe, a test suite to investigate in-context compositional generalization.
We find that the compositional generalization performance can be easily affected by the selection of in-context examples.
Our systematic experiments indicate that in-context examples should be structurally similar to the test case, diverse from each other, and individually simple.
arXiv Detail & Related papers (2023-05-08T16:32:18Z) - Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models [57.08925810659545]
We conduct a comparative analysis of the visual representations in existing vision-and-language models and vision-only models.
Our empirical observations suggest that vision-and-language models are better than vision-only models at label prediction tasks.
We hope our study sheds light on the role of language in visual learning, and serves as an empirical guide for various pretrained models.
arXiv Detail & Related papers (2022-12-01T05:00:18Z) - Learnable Visual Words for Interpretable Image Recognition [70.85686267987744]
We propose Learnable Visual Words (LVW) to interpret model prediction behaviors through two novel modules.
Semantic visual word learning relaxes the category-specific constraint, enabling general visual words to be shared across different categories.
Our experiments on six visual benchmarks demonstrate the superior effectiveness of our proposed LVW in both accuracy and model interpretation.
arXiv Detail & Related papers (2022-05-22T03:24:45Z) - Interpreting Language Models with Contrastive Explanations [99.7035899290924]
Language models must consider various features to predict a token, such as its part of speech, number, tense, or semantics.
Existing explanation methods conflate evidence for all of these features into a single explanation, which is harder for humans to interpret.
We show that contrastive explanations are quantifiably better than non-contrastive explanations in verifying major grammatical phenomena.
arXiv Detail & Related papers (2022-02-21T18:32:24Z) - Worst of Both Worlds: Biases Compound in Pre-trained Vision-and-Language Models [17.90351661475405]
This work extends text-based bias analysis methods to investigate multimodal language models.
We demonstrate that VL-BERT exhibits gender biases, often preferring to reinforce a stereotype over faithfully describing the visual scene.
arXiv Detail & Related papers (2021-04-18T00:02:32Z) - Sentiment Analysis with Contextual Embeddings and Self-Attention [3.0079490585515343]
In natural language, the intended meaning of a word or phrase is often implicit and depends on its context.
We propose a simple yet effective method for sentiment analysis using contextual embeddings and a self-attention mechanism (a minimal sketch of such an architecture appears after this list).
The experimental results for three languages, including morphologically rich Polish and German, show that our model is comparable to or even outperforms state-of-the-art models.
arXiv Detail & Related papers (2020-03-12T02:19:51Z)
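Below is a rough sketch of the demographic-prompting evaluation loop described in the "Exposing Bias in Online Communities" entry above. The generate() callable and the prompt template are assumptions standing in for a community-finetuned model; sentiment and toxicity scoring use the vaderSentiment and detoxify packages.

```python
# Rough sketch of demographic-prompting bias evaluation (assumptions noted
# in comments; not the paper's released code).
# pip install vaderSentiment detoxify
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from detoxify import Detoxify


def probe_bias(generate, demographics):
    """Prompt a model with different demographic groups and compare the
    sentiment and toxicity of its completions.

    `generate` is a hypothetical callable wrapping a community-finetuned
    language model: prompt string in, completion string out.
    """
    sentiment = SentimentIntensityAnalyzer()
    toxicity_model = Detoxify("original")
    results = {}
    for group in demographics:
        prompt = f"The {group} person said"  # illustrative template
        completion = generate(prompt)
        results[group] = {
            "sentiment": sentiment.polarity_scores(completion)["compound"],
            "toxicity": toxicity_model.predict(completion)["toxicity"],
        }
    return results


if __name__ == "__main__":
    # Stand-in "model" so the sketch runs without any weights.
    echo = lambda prompt: prompt + " something nice."
    print(probe_bias(echo, ["young", "old"]))
```

Comparing per-group scores across models finetuned on different communities is what surfaces each community's absorbed bias.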
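And here is a minimal PyTorch sketch of the kind of architecture the "Sentiment Analysis with Contextual Embeddings and Self-Attention" entry describes: a learned attention query pools frozen contextual token embeddings into a sentence vector for classification. This is a simple attention-pooling variant of the idea, not the paper's exact model; the embedding dimension and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SelfAttentionSentiment(nn.Module):
    """Pools contextual token embeddings with attention, then classifies.

    A sketch of the general technique, not the paper's exact model;
    embed_dim and num_classes are illustrative assumptions.
    """

    def __init__(self, embed_dim: int = 768, num_classes: int = 3):
        super().__init__()
        # Learned query vector that scores each token embedding.
        self.attn_query = nn.Parameter(torch.randn(embed_dim))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (batch, seq_len, embed_dim), e.g. from a frozen BERT.
        scores = embeddings @ self.attn_query            # (batch, seq_len)
        weights = scores.softmax(dim=-1).unsqueeze(-1)   # (batch, seq_len, 1)
        pooled = (weights * embeddings).sum(dim=1)       # (batch, embed_dim)
        return self.classifier(pooled)


if __name__ == "__main__":
    model = SelfAttentionSentiment()
    fake_embeddings = torch.randn(2, 10, 768)  # stand-in for BERT outputs
    print(model(fake_embeddings).shape)        # torch.Size([2, 3])
```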
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.