Explaining CLIP through Co-Creative Drawings and Interaction
- URL: http://arxiv.org/abs/2306.07429v1
- Date: Mon, 12 Jun 2023 21:15:25 GMT
- Title: Explaining CLIP through Co-Creative Drawings and Interaction
- Authors: Varvara Guljajeva and Mar Canet Solà and Isaac Joseph Clarke
- Abstract summary: This paper analyses a visual archive of drawings produced by an interactive robotic art installation where audience members narrated their dreams into a system powered by the CLIPdraw deep learning (DL) model.
The resulting archive of prompt-image pairs was examined and clustered based on concept representation accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper analyses a visual archive of drawings produced by an interactive
robotic art installation where audience members narrated their dreams into a
system powered by the CLIPdraw deep learning (DL) model, which interpreted and
transformed their dreams into images. The resulting archive of prompt-image
pairs was examined and clustered based on concept representation accuracy. As
a result of the analysis, the paper proposes four groupings for describing and
explaining CLIP-generated results: clear concept, text-to-text as image,
indeterminacy and confusion, and lost in translation. This article offers a
glimpse into a collection of dreams interpreted, mediated and given form by
Artificial Intelligence (AI), showcasing the oftentimes unexpected, visually
compelling or, indeed, dream-like output of the system, with an emphasis
on processes and results of translations between languages, sign-systems and
various modules of the installation. In the end, the paper argues that the
proposed clusters support a better understanding of the neural model.
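As background on the underlying mechanism, here is a minimal sketch of the CLIP-guided optimisation idea behind CLIPdraw-style systems, assuming the open-source OpenAI CLIP package. It optimises raw pixels for simplicity, whereas CLIPdraw itself optimises vector strokes through a differentiable rasteriser, and the installation adds speech-to-text and robotic drawing stages; the prompt below is hypothetical.

```python
# Minimal sketch: repeatedly adjust an image so its CLIP embedding moves closer
# to the embedding of the spoken prompt. Pixel-space simplification of the
# stroke-based CLIPdraw approach; illustration only, not the installation's code.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model = model.float().requires_grad_(False)

prompt = "I dreamt I was flying over a glowing city"  # hypothetical dream narration
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Start from noise and ascend the cosine similarity between image and prompt.
canvas = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
optimizer = torch.optim.Adam([canvas], lr=0.05)

for step in range(200):
    img_feat = model.encode_image(canvas.clamp(0, 1))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = -(img_feat * text_feat).sum()  # negative cosine similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
# `canvas` now loosely depicts what CLIP associates with the prompt.
```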
Related papers
- Language Model as Visual Explainer [72.88137795439407]
We present a systematic approach for interpreting vision models using a tree-structured linguistic explanation.
Our method provides human-understandable explanations in the form of attribute-laden trees.
To assess the effectiveness of our approach, we introduce new benchmarks and conduct rigorous evaluations.
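For illustration only, an attribute-laden explanation tree of the kind described above can be represented as a simple nested structure; the concept and attribute names below are invented, not the paper's output.

```python
# Hypothetical, hand-written example of an attribute-laden explanation tree,
# showing the data shape such tree-structured explanations take.
explanation_tree = {
    "concept": "sparrow",
    "attributes": ["small body", "brown plumage"],
    "children": [
        {"concept": "head", "attributes": ["short beak", "striped crown"], "children": []},
        {"concept": "wings", "attributes": ["rounded shape", "barred feathers"], "children": []},
    ],
}

def print_tree(node, depth=0):
    # Walk the tree and print each concept with its attributes.
    print("  " * depth + f"{node['concept']}: {', '.join(node['attributes'])}")
    for child in node["children"]:
        print_tree(child, depth + 1)

print_tree(explanation_tree)
```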
arXiv Detail & Related papers (2024-12-08T20:46:23Z) - Towards Interpreting Visual Information Processing in Vision-Language Models [24.51408101801313]
Vision-Language Models (VLMs) are powerful tools for processing and understanding text and images.
We study the processing of visual tokens in the language model component of LLaVA, a prominent VLM.
arXiv Detail & Related papers (2024-10-09T17:55:02Z) - Compositional Entailment Learning for Hyperbolic Vision-Language Models [54.41927525264365]
We show how to fully leverage the innate hierarchical nature of hyperbolic embeddings by looking beyond individual image-text pairs.
We propose Compositional Entailment Learning for hyperbolic vision-language models.
Empirical evaluation on a hyperbolic vision-language model trained with millions of image-text pairs shows that the proposed compositional learning approach outperforms conventional Euclidean CLIP learning.
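As generic background on the hyperbolic embeddings this entry relies on (not the paper's own formulation), the Poincaré-ball distance commonly used in such models can be sketched as follows.

```python
# Poincare-ball distance, the standard metric behind many hyperbolic embedding
# models. Generic background only, not code from the paper above.
import torch

def poincare_distance(x: torch.Tensor, y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """d(x, y) = arccosh(1 + 2 * ||x - y||^2 / ((1 - ||x||^2) * (1 - ||y||^2)))"""
    sq_diff = (x - y).pow(2).sum(dim=-1)
    sq_x = x.pow(2).sum(dim=-1).clamp(max=1 - eps)
    sq_y = y.pow(2).sum(dim=-1).clamp(max=1 - eps)
    arg = 1 + 2 * sq_diff / ((1 - sq_x) * (1 - sq_y))
    return torch.acosh(arg.clamp(min=1 + eps))

# Points near the ball's boundary are far apart even when Euclidean-close,
# which is what lets hierarchies spread out toward the boundary.
a = torch.tensor([0.10, 0.0])
b = torch.tensor([0.95, 0.0])
print(poincare_distance(a, b))
```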
arXiv Detail & Related papers (2024-10-09T14:12:50Z) - What Makes a Maze Look Like a Maze? [92.80800000328277]
We introduce Deep Schema Grounding (DSG), a framework that leverages explicit structured representations of visual abstractions for grounding and reasoning.
At the core of DSG are schemas: dependency-graph descriptions of abstract concepts that decompose them into more primitive-level symbols.
We show that DSG significantly improves the abstract visual reasoning performance of vision-language models.
arXiv Detail & Related papers (2024-09-12T16:41:47Z) - Explaining Multi-modal Large Language Models by Analyzing their Vision Perception [4.597864989500202]
This research proposes a novel approach to enhance the interpretability of MLLMs by focusing on the image embedding component.
We combine an open-world localization model with an MLLM, creating a new architecture able to simultaneously produce text and object localization outputs from the same vision embedding.
arXiv Detail & Related papers (2024-05-23T14:24:23Z) - Hierarchical Text-to-Vision Self Supervised Alignment for Improved Histopathology Representation Learning [64.1316997189396]
We present a novel language-tied self-supervised learning framework, Hierarchical Language-tied Self-Supervision (HLSS) for histopathology images.
Our resulting model achieves state-of-the-art performance on two medical imaging benchmarks, the OpenSRH and TCGA datasets.
arXiv Detail & Related papers (2024-03-21T17:58:56Z) - Foundational Models Defining a New Era in Vision: A Survey and Outlook [151.49434496615427]
Vision systems that see and reason about the compositional nature of visual scenes are fundamental to understanding our world.
Models that learn to bridge the gap between such modalities, coupled with large-scale training data, facilitate contextual reasoning, generalization, and prompting capabilities at test time.
The output of such models can be modified through human-provided prompts without retraining, e.g., segmenting a particular object by providing a bounding box, having interactive dialogues by asking questions about an image or video scene, or manipulating a robot's behavior through language instructions.
arXiv Detail & Related papers (2023-07-25T17:59:18Z) - Visually-Situated Natural Language Understanding with Contrastive
Reading Model and Frozen Large Language Models [24.456117679941816]
Contrastive Reading Model (Cream) is a novel neural architecture designed to enhance the language-image understanding capability of Large Language Models (LLMs).
Our approach bridges the gap between vision and language understanding, paving the way for the development of more sophisticated Document Intelligence Assistants.
arXiv Detail & Related papers (2023-05-24T11:59:13Z) - Perceptual Grouping in Contrastive Vision-Language Models [59.1542019031645]
We examine how well vision-language models understand where objects reside within an image and group together visually related parts of the imagery.
We propose a minimal set of modifications that results in models that uniquely learn both semantic and spatial information.
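To illustrate what spatial grounding of a text query looks like in practice, the following naive sketch scores sliding-window crops of an image against a query with off-the-shelf CLIP. It is a baseline-style illustration only, not the training modifications the paper proposes; the image path and query are hypothetical.

```python
# Naive crop-scoring baseline: score sliding-window crops against a text query
# with off-the-shelf CLIP to get a coarse "where is it" map.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = Image.open("scene.jpg")                       # hypothetical input image
query = clip.tokenize(["a red bicycle"]).to(device)   # hypothetical text query

with torch.no_grad():
    text_feat = model.encode_text(query)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    w, h = image.size
    win = min(w, h) // 2                              # square window size
    step = max(win // 2, 1)
    scores = []
    for top in range(0, h - win + 1, step):
        for left in range(0, w - win + 1, step):
            crop = preprocess(image.crop((left, top, left + win, top + win)))
            img_feat = model.encode_image(crop.unsqueeze(0).to(device))
            img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
            scores.append(((left, top), (img_feat @ text_feat.T).item()))

# The highest-scoring window indicates where the queried concept likely resides.
print(max(scores, key=lambda s: s[1]))
```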
arXiv Detail & Related papers (2022-10-18T17:01:35Z) - Compositional Mixture Representations for Vision and Text [43.2292923754127]
A common representation space between vision and language allows deep networks to relate objects in the image to the corresponding semantic meaning.
We present a model that learns a shared Gaussian mixture representation, imposing the compositionality of the text onto the visual domain without explicit location supervision.
arXiv Detail & Related papers (2022-06-13T18:16:40Z) - Visual Probing: Cognitive Framework for Explaining Self-Supervised Image
Representations [12.485001250777248]
Recently introduced self-supervised methods for image representation learning provide results on par with or superior to their fully supervised competitors.
Motivated by this observation, we introduce a novel visual probing framework for explaining self-supervised models, building on probing tasks adapted from natural language processing.
We show the effectiveness and applicability of these visual analogs in the context of explaining self-supervised representations.
arXiv Detail & Related papers (2021-06-21T12:40:31Z) - Object Relational Graph with Teacher-Recommended Learning for Video
Captioning [92.48299156867664]
We propose a complete video captioning system including both a novel model and an effective training strategy.
Specifically, we propose an object relational graph (ORG) based encoder, which captures more detailed interaction features to enrich visual representation.
Meanwhile, we design a teacher-recommended learning (TRL) method that makes full use of an external language model (ELM) to integrate abundant linguistic knowledge into the caption model.
arXiv Detail & Related papers (2020-02-26T15:34:52Z)