3VL: using Trees to teach Vision & Language models compositional concepts
- URL: http://arxiv.org/abs/2312.17345v1
- Date: Thu, 28 Dec 2023 20:26:03 GMT
- Title: 3VL: using Trees to teach Vision & Language models compositional concepts
- Authors: Nir Yellinek, Leonid Karlinsky and Raja Giryes
- Abstract summary: We introduce the Tree-augmented Vision-Language (3VL) model architecture and training technique.
We show how Anchor, a simple technique for text unification, can be employed to filter nuisance factors.
We also exhibit how DiRe, which performs a differential comparison between VLM relevancy maps, enables us to generate compelling visualizations of a model's success or failure.
- Score: 45.718319397947056
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Vision-Language models (VLMs) have proved effective at aligning image and
text representations, producing superior zero-shot results when transferred to
many downstream tasks. However, these representations suffer from some key
shortcomings in Compositional Language Concepts (CLC) understanding, such as
recognizing objects' attributes, states, and relations between different
objects. Moreover, VLMs typically have poor interpretability, making it
challenging to debug and mitigate compositional-understanding failures. In this
work, we introduce the Tree-augmented Vision-Language (3VL) model architecture
and training technique accompanied by our proposed Anchor inference method and
Differential Relevance (DiRe) interpretability tool. By expanding the text of
an arbitrary image-text pair into a hierarchical tree structure using language
analysis tools, 3VL allows inducing this structure into the visual
representation learned by the model, enhancing its interpretability and
compositional reasoning. Additionally, we show how Anchor, a simple technique
for text unification, can be employed to filter nuisance factors while
increasing CLC understanding performance, e.g., on the fundamental VL-Checklist
benchmark. We also exhibit how DiRe, which performs a differential comparison
between VLM relevancy maps, enables us to generate compelling visualizations of
the reasons for a model's success or failure.
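The paper's code is not reproduced here, but the two core ideas lend themselves to short sketches. First, the tree expansion: a caption is decomposed with standard language-analysis tools into a coarse-to-fine hierarchy of sub-captions, and each level can be paired with hard negatives obtained by swapping words. The sketch below is a minimal illustration of this idea using spaCy; the expansion rules, the antonym table, and all function names are assumptions, not the authors' implementation.
```python
# Minimal, hypothetical sketch of expanding a caption into a coarse-to-fine
# "tree" of sub-captions with per-level hard negatives, in the spirit of 3VL.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Toy antonym table used to create hard negatives by swapping attribute words;
# a real system would derive swaps from a language model or a lexical resource.
NEG_SWAPS = {"red": "blue", "large": "small", "wooden": "metal", "open": "closed"}

def negate(text: str):
    """Return perturbed variants of `text` with one attribute word swapped."""
    return [text.replace(w, r) for w, r in NEG_SWAPS.items() if w in text]

def expand_caption(caption: str):
    """Return tree levels as (positive_text, [negative_texts]) tuples."""
    doc = nlp(caption)
    # level 1: object nouns only, e.g. "ball table"
    nouns = " ".join(tok.text for tok in doc if tok.pos_ == "NOUN")
    # level 2: noun chunks, i.e. objects with their attributes, e.g. "a large red ball"
    chunks = ", ".join(chunk.text for chunk in doc.noun_chunks)
    # level 3: the full caption (objects, attributes, and relations)
    return [(nouns, negate(nouns)), (chunks, negate(chunks)), (caption, negate(caption))]

if __name__ == "__main__":
    for pos, negs in expand_caption("a large red ball next to a wooden table"):
        print("POS:", pos, "| NEGS:", negs)
```
Second, DiRe: the abstract describes it only as a differential comparison between relevancy maps for different texts. Assuming a helper `relevancy_map(image, text)` that returns a per-pixel relevancy array for a CLIP-like model (e.g. from a Grad-CAM-style explainer), the difference map could take the general shape sketched below; this is an assumption, not the authors' exact procedure.
```python
# Hypothetical DiRe-style visualization: subtract the relevancy map of a
# perturbed (negative) caption from that of the positive caption and keep the
# regions that pushed the model toward the correct text.
import numpy as np

def dire_map(relevancy_map, image, pos_text: str, neg_text: str) -> np.ndarray:
    pos = relevancy_map(image, pos_text)   # HxW relevancy for the correct caption
    neg = relevancy_map(image, neg_text)   # HxW relevancy for the perturbed caption
    diff = np.clip(pos - neg, 0.0, None)   # keep evidence favoring the positive text
    return diff / (diff.max() + 1e-8)      # normalize for display as a heatmap
```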
Related papers
- ComAlign: Compositional Alignment in Vision-Language Models [2.3250871476216814]
We introduce Compositional Alignment (ComAlign) to discover more exact correspondences between text and image components.
Our methodology emphasizes that the compositional structure extracted from the text modality must also be retained in the image modality.
We train a lightweight network on top of existing visual and language encoders using a small dataset.
arXiv Detail & Related papers (2024-09-12T16:46:41Z)
- In-Context Learning Improves Compositional Understanding of Vision-Language Models [2.762909189433944]
Compositional image understanding remains a difficult task due to the object bias present in training data.
We compare contrastive models with generative ones and analyze their differences in architecture, pre-training data, and training tasks and losses.
Our proposed approach outperforms baseline models across multiple compositional understanding datasets.
arXiv Detail & Related papers (2024-07-22T09:03:29Z)
- Weak-to-Strong Compositional Learning from Generative Models for Language-based Object Detection [19.610781457283966]
We introduce a novel method for enhancing the compositional understanding of vision-language (VL) models in language-based object detection.
Our framework generates densely paired positive and negative triplets (image, text descriptions, and bounding boxes) in both image and text domains.
We propose a new compositional contrastive learning formulation that discovers semantics and structures in complex descriptions from synthetic triplets.
arXiv Detail & Related papers (2024-07-21T23:43:24Z)
- Emergent Visual-Semantic Hierarchies in Image-Text Representations [13.300199242824934]
We study the knowledge of existing foundation models, finding that they exhibit emergent understanding of visual-semantic hierarchies.
We propose the Radial Embedding (RE) framework for probing and optimizing hierarchical understanding.
arXiv Detail & Related papers (2024-07-11T14:09:42Z)
- Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models [73.40350756742231]
Visually-conditioned language models (VLMs) have seen growing adoption in applications such as visual dialogue, scene understanding, and robotic task planning.
Despite the volume of new releases, key design decisions around image preprocessing, architecture, and optimization are under-explored.
arXiv Detail & Related papers (2024-02-12T18:21:14Z)
- Text encoders bottleneck compositionality in contrastive vision-language models [76.2406963762722]
We train text-only recovery probes that aim to reconstruct captions from single-vector text representations.
We find that CLIP's text encoder falls short on more compositional inputs.
Results suggest text-only recoverability is a necessary (but not sufficient) condition for modeling compositional factors; a minimal sketch of such a probe appears after this list.
arXiv Detail & Related papers (2023-05-24T08:48:44Z)
- Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality [50.48859793121308]
Contrastively trained vision-language models have achieved remarkable progress in vision and language representation learning.
Recent research has highlighted severe limitations in their ability to perform compositional reasoning over objects, attributes, and relations.
arXiv Detail & Related papers (2023-05-23T08:28:38Z)
- Perceptual Grouping in Contrastive Vision-Language Models [59.1542019031645]
We show how vision-language models are able to understand where objects reside within an image and group together visually related parts of the imagery.
We propose a minimal set of modifications that results in models that uniquely learn both semantic and spatial information.
arXiv Detail & Related papers (2022-10-18T17:01:35Z)
- Unsupervised Vision-Language Parsing: Seamlessly Bridging Visual Scene Graphs with Language Structures via Dependency Relationships [17.930724926012264]
We introduce a new task that targets inducing a joint vision-language structure in an unsupervised manner.
Our goal is to bridge the visual scene graphs and linguistic dependency trees seamlessly.
We propose an automatic alignment procedure to produce coarse structures followed by human refinement to produce high-quality ones.
arXiv Detail & Related papers (2022-03-27T09:51:34Z)
- Object Relational Graph with Teacher-Recommended Learning for Video Captioning [92.48299156867664]
We propose a complete video captioning system including both a novel model and an effective training strategy.
Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation.
Meanwhile, we design a teacher-recommended learning (TRL) method to make full use of a successful external language model (ELM), integrating its abundant linguistic knowledge into the captioning model.
arXiv Detail & Related papers (2020-02-26T15:34:52Z)
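As a rough illustration of the recovery-probe idea in the "Text encoders bottleneck compositionality" entry above, the sketch below shows only the general shape of a small decoder trained to reconstruct caption tokens from a frozen single-vector text embedding (e.g. a CLIP text embedding). The module sizes, names, and training setup are assumptions, not that paper's implementation.
```python
# Hypothetical text-only recovery probe: a small GRU decoder that tries to
# reconstruct caption tokens from a frozen single-vector text embedding.
import torch
import torch.nn as nn

class RecoveryProbe(nn.Module):
    def __init__(self, embed_dim: int, vocab_size: int, hidden: int = 512):
        super().__init__()
        self.to_hidden = nn.Linear(embed_dim, hidden)   # map caption embedding to initial state
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, text_vec: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # text_vec: (B, embed_dim) frozen caption embedding; tokens: (B, T) gold caption ids
        h0 = torch.tanh(self.to_hidden(text_vec)).unsqueeze(0)  # (1, B, hidden)
        x = self.token_emb(tokens[:, :-1])                      # teacher forcing on shifted tokens
        out, _ = self.rnn(x, h0)
        return self.out(out)                                    # logits predicting tokens[:, 1:]
```
Training such a probe with cross-entropy and comparing reconstruction accuracy on simple versus compositional captions is one way to surface the text-encoder bottleneck those authors describe.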