Composed Image Retrieval using Contrastive Learning and Task-oriented
CLIP-based Features
- URL: http://arxiv.org/abs/2308.11485v1
- Date: Tue, 22 Aug 2023 15:03:16 GMT
- Title: Composed Image Retrieval using Contrastive Learning and Task-oriented
CLIP-based Features
- Authors: Alberto Baldrati, Marco Bertini, Tiberio Uricchio, Alberto del Bimbo
- Abstract summary: Given a query composed of a reference image and a relative caption, the goal of Composed Image Retrieval is to retrieve images visually similar to the reference one.
We use features from the OpenAI CLIP model to tackle the considered task.
We train a Combiner network that learns to combine the image-text features, integrating the bimodal information.
- Score: 32.138956674478116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a query composed of a reference image and a relative caption, the
goal of Composed Image Retrieval is to retrieve images visually similar to the
reference one that integrate the modifications expressed by the caption. Given
that recent research has demonstrated the efficacy of large-scale vision and
language pre-trained (VLP) models in various tasks, we rely on features from
the OpenAI CLIP model to tackle the considered task. We initially perform a
task-oriented fine-tuning of both CLIP encoders using the element-wise sum of
visual and textual features. Then, in the second stage, we train a Combiner
network that learns to combine the image-text features integrating the bimodal
information and providing combined features used to perform the retrieval. We
use contrastive learning in both stages of training. Starting from the bare
CLIP features as a baseline, experimental results show that the task-oriented
fine-tuning and the carefully crafted Combiner network are highly effective and
outperform more complex state-of-the-art approaches on FashionIQ and CIRR, two
popular and challenging datasets for composed image retrieval. Code and
pre-trained models are available at https://github.com/ABaldrati/CLIP4Cir
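To make the two-stage recipe more concrete, the sketch below shows a Combiner-style fusion module and how the combined feature could be used to rank candidate images by cosine similarity. It is a minimal sketch assuming PyTorch and pre-extracted, L2-normalized CLIP features; the layer sizes, the gating scheme, and all names are illustrative assumptions rather than the authors' exact architecture (see the repository linked above for the official implementation).

```python
# Minimal sketch, not the authors' exact implementation (see the official repo).
# Assumes pre-extracted, L2-normalized CLIP features of dimension `clip_dim`.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Combiner(nn.Module):
    """Fuses reference-image and relative-caption CLIP features for retrieval."""

    def __init__(self, clip_dim: int = 640, hidden_dim: int = 2048):
        super().__init__()
        self.image_proj = nn.Sequential(nn.Linear(clip_dim, hidden_dim), nn.ReLU())
        self.text_proj = nn.Sequential(nn.Linear(clip_dim, hidden_dim), nn.ReLU())
        self.mixer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, clip_dim),
        )
        # Scalar gate balancing the learned fusion against the raw modalities
        # (an assumption here; the paper's Combiner has its own design).
        self.gate = nn.Sequential(nn.Linear(2 * hidden_dim, 1), nn.Sigmoid())

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([self.image_proj(img_feat), self.text_proj(txt_feat)], dim=-1)
        lam = self.gate(joint)
        fused = self.mixer(joint) + lam * txt_feat + (1.0 - lam) * img_feat
        return F.normalize(fused, dim=-1)  # retrieval uses cosine similarity


def rank_candidates(query_feat: torch.Tensor, gallery_feats: torch.Tensor) -> torch.Tensor:
    """Return gallery indices sorted by decreasing cosine similarity to each query."""
    sims = query_feat @ F.normalize(gallery_feats, dim=-1).T  # (B, N)
    return sims.argsort(dim=-1, descending=True)
```

In the first stage, the element-wise sum of the reference-image and caption features would play the role of the combined feature while both CLIP encoders are fine-tuned; in the second stage, a Combiner-style module such as the one above is trained on top of the encoder outputs. Both stages are driven by a contrastive objective (a sketch of such a loss appears after the related papers).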
Related papers
- Leveraging Cross-Modal Neighbor Representation for Improved CLIP Classification [54.96876797812238]
We present a novel CrOss-moDal nEighbor Representation (CODER) based on the distance structure between images and their neighbor texts.
The key to constructing a high-quality CODER lies in how to create a vast amount of high-quality and diverse texts to match with images.
Experimental results across various datasets and models confirm CODER's effectiveness.
arXiv Detail & Related papers (2024-04-27T02:04:36Z)
- Transformer based Multitask Learning for Image Captioning and Object Detection [13.340784876489927]
This work introduces a novel multitask learning framework that combines image captioning and object detection into a joint model.
We propose TICOD, a Transformer-based Image Captioning and Object Detection model for jointly training both tasks.
Our model outperforms the baselines from the image captioning literature by achieving a 3.65% improvement in BERTScore.
arXiv Detail & Related papers (2024-03-10T19:31:13Z)
- Enhancing Image Retrieval: A Comprehensive Study on Photo Search using the CLIP Model [0.27195102129095]
Photo search has witnessed significant advancements with the introduction of the CLIP (Contrastive Language-Image Pretraining) model.
This abstract summarizes the foundational principles of CLIP and highlights its potential impact on advancing the field of photo search.
arXiv Detail & Related papers (2024-01-24T17:35:38Z)
- SILC: Improving Vision Language Pretraining with Self-Distillation [113.50400246862056]
We introduce SILC, a novel framework for vision language pretraining.
SILC improves image-text contrastive learning with the simple addition of local-to-global correspondence learning by self-distillation.
We show that distilling local image features from an exponential moving average (EMA) teacher model significantly improves model performance on dense predictions tasks like detection and segmentation.
arXiv Detail & Related papers (2023-10-20T08:44:47Z)
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)
- COTS: Collaborative Two-Stream Vision-Language Pre-Training Model for Cross-Modal Retrieval [59.15034487974549]
We propose a novel COllaborative Two-Stream vision-language pretraining model termed COTS for image-text retrieval.
Our COTS achieves the highest performance among all two-stream methods and comparable performance with 10,800X faster inference.
Importantly, our COTS is also applicable to text-to-video retrieval, yielding a new state-of-the-art on the widely-used MSR-VTT dataset.
arXiv Detail & Related papers (2022-04-15T12:34:47Z)
- CRIS: CLIP-Driven Referring Image Segmentation [71.56466057776086]
We propose an end-to-end CLIP-Driven Referring Image Segmentation framework (CRIS).
CRIS resorts to vision-language decoding and contrastive learning for achieving the text-to-pixel alignment.
Our proposed framework significantly outperforms the state of the art without any post-processing.
arXiv Detail & Related papers (2021-11-30T07:29:08Z)
- Exploiting the relationship between visual and textual features in social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks using the labels of the Places dataset, are performed by first considering only the visual part.
Considering the texts associated with the images can help to improve the accuracy, depending on the goal.
arXiv Detail & Related papers (2021-07-08T10:54:59Z)
- Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision [57.031588264841]
We leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps.
A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss (see the sketch below).
We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme.
arXiv Detail & Related papers (2021-02-11T10:08:12Z)
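Both the paper above and several of the related works (for instance, the dual-encoder approach in the last entry) train with a symmetric image-text contrastive objective. The function below is a minimal sketch of such a batch-wise, InfoNCE-style loss, assuming PyTorch and L2-normalized feature batches; it illustrates the general technique rather than the exact loss used by any of the listed papers, and the temperature value is an arbitrary assumption.

```python
# Minimal sketch of a symmetric batch-wise contrastive (InfoNCE-style) loss.
# Assumes L2-normalized image and text feature batches of shape (B, D);
# the matched pair for row i is the i-th entry of the other modality.
import torch
import torch.nn.functional as F


def symmetric_contrastive_loss(img_feats: torch.Tensor,
                               txt_feats: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    logits = img_feats @ txt_feats.T / temperature      # (B, B) scaled similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)         # image-to-text direction
    loss_t2i = F.cross_entropy(logits.T, targets)       # text-to-image direction
    return 0.5 * (loss_i2t + loss_t2i)
```

In the composed-retrieval setting described above, the text side of each pair would be replaced by the combined reference-image/caption feature and the image side by the target-image feature.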
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.