ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training
- URL: http://arxiv.org/abs/2210.01738v3
- Date: Fri, 10 Nov 2023 10:44:44 GMT
- Title: ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training
- Authors: Antonio Norelli, Marco Fumero, Valentino Maiorca, Luca Moschella,
Emanuele Rodolà, Francesco Locatello
- Abstract summary: We show that a common space can be created without any training at all, using single-domain encoders and a much smaller number of image-text pairs.
Our model has unique properties, most notably, deploying a new version with updated training samples can be done in a matter of seconds.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CLIP proved that aligning visual and language spaces is key to solving many
vision tasks without explicit training, but required training image and text
encoders from scratch on a huge dataset. LiT improved this by only training the
text encoder and using a pre-trained vision network. In this paper, we show
that a common space can be created without any training at all, using
single-domain encoders (trained with or without supervision) and a much smaller
number of image-text pairs. Furthermore, our model has unique properties. Most
notably, deploying a new version with updated training samples can be done in a
matter of seconds. Additionally, the representations in the common space are
easily interpretable as every dimension corresponds to the similarity of the
input to a unique image-text pair in the multimodal dataset. Experiments on
standard zero-shot visual benchmarks demonstrate the typical transfer ability
of image-text models. Overall, our method represents a simple yet surprisingly
strong baseline for foundation multimodal models, raising important questions
on their data efficiency and on the role of retrieval in machine learning.
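To make the construction concrete, here is a minimal NumPy sketch of the relative representations described above: each input is mapped to its vector of similarities against the anchor image-text pairs, so images and texts land in the same N-dimensional space without any training. The function names and the top-k/exponent defaults are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def relative_representation(z, anchors, k=800, p=8):
    """Map one absolute embedding z of shape (d,) to an N-dimensional
    relative representation: cosine similarity to each of the N anchor
    embeddings (N, d), keeping only the top-k entries and raising them
    to the power p. The k and p values here are illustrative defaults."""
    sims = l2_normalize(anchors) @ l2_normalize(z)        # (N,) cosine similarities
    if k < sims.shape[0]:
        thresh = np.partition(sims, -k)[-k]
        sims = np.where(sims >= thresh, sims, 0.0)        # sparsify: keep top-k
    sims = np.sign(sims) * np.abs(sims) ** p              # sharpen the kept weights
    return l2_normalize(sims)

def asif_zero_shot_scores(image_emb, prompt_embs, anchor_img_embs, anchor_txt_embs):
    """Score class prompts against one image entirely in the shared relative
    space: dimension i of both representations refers to the same image-text
    pair i of the small multimodal anchor collection."""
    r_img = relative_representation(image_emb, anchor_img_embs)
    r_txt = np.stack([relative_representation(t, anchor_txt_embs) for t in prompt_embs])
    return r_txt @ r_img   # one score per class prompt; argmax is the prediction
```

Zero-shot classification then reduces to matching the image's relative vector against those of the class prompts, and deploying a new version with updated samples amounts to recomputing anchor embeddings rather than retraining anything.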
Related papers
- VISTA: Visualized Text Embedding For Universal Multi-Modal Retrieval [10.603148564713518]
We present a new embedding model VISTA for universal multi-modal retrieval.
First, we introduce a flexible architecture that extends a powerful text encoder with image-understanding capability.
Second, we develop two data-generation strategies that produce high-quality composed image-text data to facilitate training of the embedding model.
arXiv Detail & Related papers (2024-06-06T17:37:47Z)
- EVE: Efficient Vision-Language Pre-training with Masked Prediction and Modality-Aware MoE [66.48689706116808]
Efficient Vision-languagE (EVE) is a unified multimodal Transformer pre-trained solely with a single unified pre-training task.
EVE encodes both vision and language within a shared Transformer network integrated with modality-aware sparse Mixture-of-Experts.
EVE achieves state-of-the-art performance on various vision-language downstream tasks, including visual question answering, visual reasoning, and image-text retrieval.
arXiv Detail & Related papers (2023-08-23T07:36:30Z)
- MoMo: A shared encoder Model for text, image and multi-Modal representations [4.812718493682455]
We propose a self-supervised shared encoder model that achieves strong results on several visual, language and multimodal benchmarks.
We use a single transformer with all the encoder layers processing both the text and the image modalities.
arXiv Detail & Related papers (2023-04-11T22:26:10Z)
- Language Quantized AutoEncoders: Towards Unsupervised Text-Image Alignment [81.73717488887938]
Language-Quantized AutoEncoder (LQAE) learns to align text-image data in an unsupervised manner by leveraging pretrained language models.
LQAE learns to represent similar images with similar clusters of text tokens, thereby aligning these two modalities without the use of aligned text-image pairs.
This enables few-shot image classification with large language models (e.g., GPT-3) as well as linear classification of images based on BERT text features.
arXiv Detail & Related papers (2023-02-02T06:38:44Z)
- Vision Learners Meet Web Image-Text Pairs [32.36188289972377]
In this work, we consider self-supervised pre-training on noisy web-sourced image-text paired data.
We compare a range of methods, including single-modal ones that use masked training objectives and multi-modal ones that use image-text contrastive training.
We present a new visual representation pre-training method, MUlti-modal Generator (MUG), that learns from scalable web-sourced image-text data.
arXiv Detail & Related papers (2023-01-17T18:53:24Z)
- Multimodal Masked Autoencoders Learn Transferable Representations [127.35955819874063]
We propose a simple and scalable network architecture, the Multimodal Masked Autoencoder (M3AE).
M3AE learns a unified encoder for both vision and language data via masked token prediction.
We provide an empirical study of M3AE trained on a large-scale image-text dataset, and find that M3AE is able to learn generalizable representations that transfer well to downstream tasks.
arXiv Detail & Related papers (2022-05-27T19:09:42Z)
- Multimodal Knowledge Alignment with Reinforcement Learning [103.68816413817372]
ESPER extends language-only zero-shot models to unseen multimodal tasks, like image and audio captioning.
Our key novelty is to use reinforcement learning to align multimodal inputs to language model generations without direct supervision.
Experiments demonstrate that ESPER outperforms baselines and prior work on a variety of zero-shot tasks.
arXiv Detail & Related papers (2022-05-25T10:12:17Z)
- Multimodal Semi-Supervised Learning for Text Recognition [10.33262222726707]
We present semi-supervised learning for multimodal text recognizers (SemiMTR) that leverages unlabeled data at each modality training phase.
Our algorithm starts by pretraining the vision model through a single-stage training that unifies self-supervised learning with supervised training.
In a novel setup, consistency is enforced on each modality separately.
arXiv Detail & Related papers (2022-05-08T13:55:30Z)
- Unsupervised Vision-and-Language Pre-training via Retrieval-based Multi-Granular Alignment [66.77841319057299]
We propose a novel unsupervised Vision-and-Language pre-training curriculum for non-parallel texts and images.
We first construct a weakly aligned image-text corpus via a retrieval-based approach, then apply a set of multi-granular alignment pre-training tasks.
A comprehensive ablation study shows each granularity is helpful to learn a stronger pre-trained model.
arXiv Detail & Related papers (2022-03-01T05:34:01Z)
- Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision [57.031588264841]
We leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps.
A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss (a minimal sketch of this objective follows the list).
We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme.
arXiv Detail & Related papers (2021-02-11T10:08:12Z)
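As a point of contrast with the training-free construction above, the dual-encoder contrastive objective referenced in the last entry (the CLIP/ALIGN-style recipe) can be sketched as follows. The symmetric batch-wise InfoNCE form is standard; the temperature value and function names here are illustrative assumptions, not ALIGN's actual settings.

```python
import numpy as np
from scipy.special import logsumexp

def contrastive_loss(img_embs, txt_embs, temperature=0.07):
    """Symmetric (image-to-text and text-to-image) InfoNCE loss over a batch
    of B aligned image-text pairs. img_embs and txt_embs are (B, d) encoder
    outputs; pair i is the positive, every other batch element is a negative.
    The temperature is a placeholder, not a tuned value."""
    img = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
    txt = txt_embs / np.linalg.norm(txt_embs, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                              # (B, B) similarities
    diag = np.arange(len(logits))
    log_p_i2t = logits - logsumexp(logits, axis=1, keepdims=True)   # rows: image -> text
    log_p_t2i = logits - logsumexp(logits, axis=0, keepdims=True)   # cols: text -> image
    return -0.5 * (log_p_i2t[diag, diag].mean() + log_p_t2i[diag, diag].mean())
```

Minimizing this loss end-to-end over both encoders is what ASIF avoids: instead of learning the alignment, it reads it off a fixed collection of coupled pairs.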