Exploring Annotation-free Image Captioning with Retrieval-augmented
Pseudo Sentence Generation
- URL: http://arxiv.org/abs/2307.14750v2
- Date: Fri, 28 Jul 2023 05:53:33 GMT
- Title: Exploring Annotation-free Image Captioning with Retrieval-augmented
Pseudo Sentence Generation
- Authors: Zhiyuan Li and Dongnan Liu and Heng Wang and Chaoyi Zhang and Weidong
Cai
- Abstract summary: We introduce Retrieval-augmented Pseudo Sentence Generation (RaPSG) to train captioners without annotated image-sentence pairs.
RaPSG retrieves relevant short region descriptions from mismatching corpora and uses them to generate a variety of pseudo sentences with distinct representations.
We show that our method surpasses the SOTA pre-training model (Flamingo3B) by achieving a CIDEr score of 78.1 (+5.1) while utilizing only 0.3% of its trainable parameters.
- Score: 23.54149252498897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training an image captioner without annotated image-sentence pairs has gained
traction in recent years. Previous approaches can be categorized into two
strategies: crawling sentences from mismatching corpora and aligning them with
the given images as pseudo annotations, or pre-training the captioner using
external image-text pairs. However, the aligning setting seems to reach its
performance limit due to the low quality of the pseudo pairs, and pre-training
requires significant computational resources. To address these challenges, we
propose a new strategy, "LPM + retrieval-augmented learning", where the prior
knowledge from large pre-trained models (LPMs) is leveraged as supervision, and
a retrieval process is integrated to further reinforce its effectiveness.
Specifically, we introduce Retrieval-augmented Pseudo Sentence Generation
(RaPSG), which adopts an efficient approach to retrieve highly relevant short
region descriptions from the mismatching corpora and use them to generate a
variety of pseudo sentences with distinct representations as well as high
quality via LPMs. In addition, a fluency filter and a CLIP-guided training
objective are further introduced to facilitate model optimization. Experimental
results demonstrate that our method surpasses the SOTA pre-training model
(Flamingo3B) by achieving a CIDEr score of 78.1 (+5.1) while utilizing only
0.3% of its trainable parameters (1.3B vs. 33M). Importantly, our approach
eliminates the need for computationally expensive pre-training on
external datasets (e.g., the requirement of 312M image-text pairs for
Flamingo3B). We further show that with a simple extension, the generated pseudo
sentences can be deployed as weak supervision to boost the 1% semi-supervised
image captioning benchmark to a CIDEr score of 93.4 (+8.9), which showcases the
versatility and effectiveness of our approach.
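To make the idea concrete, below is a minimal, illustrative sketch of the pipeline the abstract describes: retrieve short region descriptions relevant to an image from a mismatching corpus, have a large pre-trained model compose candidate captions from them, and keep only candidates that a CLIP score judges well aligned with the image. This is not the authors' implementation; the CLIP checkpoint, the toy corpus, the pairwise-join stand-in for the LPM generation step, and the score threshold are assumptions made for illustration, and the paper's fluency filter and CLIP-guided training objective are not reproduced here.

```python
# Conceptual sketch of retrieval-augmented pseudo sentence generation:
# (1) retrieve region descriptions relevant to the image, (2) compose
# candidate pseudo sentences (an LPM would be prompted here), (3) keep
# candidates that CLIP scores as well aligned with the image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_scores(image: Image.Image, texts: list[str]) -> torch.Tensor:
    """Image-text similarity, used both for retrieval and for filtering."""
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.logits_per_image.squeeze(0)  # one score per text

def generate_pseudo_sentences(image: Image.Image,
                              corpus: list[str],
                              k: int = 5,
                              keep_threshold: float = 25.0) -> list[str]:
    # Step 1: retrieve the k region descriptions most similar to the image.
    top = clip_scores(image, corpus).topk(min(k, len(corpus))).indices.tolist()
    retrieved = [corpus[i] for i in top]

    # Step 2: an LPM would be prompted with the retrieved fragments to write
    # fluent candidate captions. As a hypothetical stand-in, fragments are
    # joined pairwise; replace this with a real LLM call.
    candidates = [f"{a} and {b}" for a in retrieved for b in retrieved if a != b]

    # Step 3: CLIP-guided filtering, keeping only well-aligned candidates
    # (the paper additionally applies a fluency filter).
    scores = clip_scores(image, candidates)
    return [c for c, s in zip(candidates, scores) if s.item() >= keep_threshold]

# Usage (assumes a local image and a toy corpus of region descriptions):
# corpus = ["a dog running on grass", "a red frisbee in the air"]
# pseudo = generate_pseudo_sentences(Image.open("photo.jpg"), corpus)
```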
Related papers
- Towards Better Alignment: Training Diffusion Models with Reinforcement Learning Against Sparse Rewards [52.90573877727541]
Reinforcement learning (RL) has been considered for diffusion model fine-tuning.
RL's effectiveness is limited by the challenge of sparse rewards.
B²-DiffuRL is compatible with existing optimization algorithms.
arXiv Detail & Related papers (2025-03-14T09:45:19Z)
- Grounding Descriptions in Images informs Zero-Shot Visual Recognition [47.66166611138081]
We propose GRAIN, a new pretraining strategy aimed at aligning representations at both fine and coarse levels simultaneously.
We demonstrate the enhanced zero-shot performance of our model compared to current state-of-the-art methods.
arXiv Detail & Related papers (2024-12-05T18:52:00Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is driven primarily by the prior of the underlying Large Language Models (LLMs) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- Distinctive Image Captioning: Leveraging Ground Truth Captions in CLIP Guided Reinforcement Learning [9.443456804893207]
Reinforcement Learning (RL) allows using the cross-modal retrieval similarity score between the generated caption and the input image as a reward to guide training.
Recent studies show that pre-trained cross-modal retrieval models can be used to provide this reward, completely eliminating the need for reference captions.
We propose a new image captioning training strategy that makes use of GT captions in different ways.
arXiv Detail & Related papers (2024-02-21T17:05:06Z)
- Mining Fine-Grained Image-Text Alignment for Zero-Shot Captioning via Text-Only Training [14.340740609933437]
We propose a novel zero-shot image captioning framework with text-only training to reduce the modality gap.
In particular, we introduce a subregion feature aggregation to leverage local region information.
We extend our framework to build a zero-shot VQA pipeline, demonstrating its generality.
arXiv Detail & Related papers (2024-01-04T16:43:46Z)
- ASPIRE: Language-Guided Data Augmentation for Improving Robustness Against Spurious Correlations [43.323791505213634]
ASPIRE (Language-guided Data Augmentation for SPurIous correlation REmoval) is a solution for supplementing the training dataset with images without spurious features.
It can generate non-spurious images without requiring any group labeling or existing non-spurious images in the training set.
It improves the worst-group classification accuracy of prior methods by 1% - 38%.
arXiv Detail & Related papers (2023-08-19T20:18:15Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the heavy demands of Vision Transformer networks for large annotated datasets.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- DisCLIP: Open-Vocabulary Referring Expression Generation [37.789850573203694]
We build on CLIP, a large-scale visual-semantic model, to guide an LLM to generate a contextual description of a target concept in an image.
We measure the quality of the generated text by evaluating the capability of a receiver model to accurately identify the described object within the scene.
Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.
arXiv Detail & Related papers (2023-05-30T15:13:17Z)
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a simple method, Self-Contrastive Learning (SSCL), to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Three ways to improve feature alignment for open vocabulary detection [88.65076922242184]
A key problem in zero-shot open-vocabulary detection is how to align visual and text features so that the detector performs well on unseen classes.
Previous approaches train the feature pyramid and detection head from scratch, which breaks the vision-text feature alignment established during pretraining.
We propose three methods to alleviate these issues. Firstly, a simple scheme is used to augment the text embeddings which prevents overfitting to a small number of classes seen during training.
Secondly, the feature pyramid network and the detection head are modified to include trainable shortcuts.
Finally, a self-training approach is used to leverage a larger corpus of image-text pairs.
arXiv Detail & Related papers (2023-03-23T17:59:53Z)
- Prompt-based Learning for Unpaired Image Captioning [86.44188293709307]
Unpaired Image Captioning (UIC) has been developed to learn image descriptions from unaligned vision-language sample pairs.
Recent successes of Vision-Language Pre-Trained Models (VL-PTMs) have triggered the development of prompt-based learning.
In this paper, we present a novel prompt-based scheme to train the UIC model, making the best use of the powerful generalization ability of VL-PTMs.
arXiv Detail & Related papers (2022-05-26T03:13:43Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- ColdGANs: Taming Language GANs with Cautious Sampling Strategies [29.943949944682196]
Generative Adversarial Networks (GANs) can mitigate the limitations of maximum-likelihood training, but the discrete nature of text has hindered their application to language generation.
We show how classical sampling results in unstable training.
We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics.
For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks.
arXiv Detail & Related papers (2020-06-08T14:48:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.