Better Captioning with Sequence-Level Exploration
- URL: http://arxiv.org/abs/2003.03749v1
- Date: Sun, 8 Mar 2020 09:08:03 GMT
- Title: Better Captioning with Sequence-Level Exploration
- Authors: Jia Chen, Qin Jin
- Abstract summary: We show the limitation of the current sequence-level learning objective for captioning tasks.
In theory, we show that the current objective is equivalent to only optimizing the precision side of the generated caption set.
Empirical results show that models trained with this objective tend to score lower on the recall side.
- Score: 60.57850194028581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The sequence-level learning objective has been widely used in captioning tasks to
achieve state-of-the-art performance for many models. In this objective,
the model is trained with a reward on the quality of its generated captions
(sequence-level). In this work, we show the limitation of the current
sequence-level learning objective for captioning tasks from both theoretical and
empirical perspectives. In theory, we show that the current objective is equivalent
to only optimizing the precision side of the caption set generated by the model
and therefore overlooks the recall side. Empirical results show that models
trained with this objective tend to score lower on the recall side. We
propose to add a sequence-level exploration term to the current objective to
boost recall. It guides the model to explore more plausible captions during
training. In this way, the proposed objective takes both the precision and
recall sides of the generated captions into account. Experiments show the
effectiveness of the proposed method on both video and image captioning
datasets.
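The abstract gives only the high-level idea of the objective, so the sketch below is provided purely for orientation: it combines a reward-driven sequence-level term (self-critical style) with an added exploration bonus. The function name `sequence_level_loss`, the `exploration_weight` parameter, and the entropy-style proxy used for exploration are illustrative assumptions, not the paper's exact formulation.

```python
# A minimal sketch (not the paper's exact objective) of a sequence-level
# captioning loss with an added exploration term. The entropy-style proxy
# and the `exploration_weight` parameter are illustrative assumptions.
import torch

def sequence_level_loss(sample_logprobs, sample_rewards, baseline_rewards,
                        exploration_weight=0.1):
    """Combine a reward-driven sequence-level term with an exploration bonus.

    sample_logprobs:  (batch, num_samples) summed log-probabilities of sampled captions
    sample_rewards:   (batch, num_samples) caption-quality rewards (e.g. CIDEr) of the samples
    baseline_rewards: (batch, 1) reward of a baseline caption (e.g. the greedy decode)
    """
    # Standard sequence-level (policy-gradient, self-critical style) term:
    # raise the probability of sampled captions that beat the baseline reward.
    advantage = (sample_rewards - baseline_rewards).detach()
    reward_term = -(advantage * sample_logprobs).mean()

    # Exploration term: a Monte-Carlo proxy for sequence entropy; pushing the
    # mean log-probability of the samples down spreads probability mass over
    # more plausible captions instead of concentrating on a few.
    exploration_term = sample_logprobs.mean()

    return reward_term + exploration_weight * exploration_term

# Toy usage with random tensors standing in for a captioning model's outputs.
torch.manual_seed(0)
logp = (-10 * torch.rand(4, 5)).requires_grad_(True)  # 5 sampled captions per image
rewards = torch.rand(4, 5)                            # e.g. CIDEr of each sample
baseline = torch.rand(4, 1)                           # reward of the greedy caption
loss = sequence_level_loss(logp, rewards, baseline)
loss.backward()
print(float(loss), logp.grad.shape)
```

In this sketch, the reward term alone corresponds to the precision-only behaviour described above, while the exploration term is what nudges the model toward covering more of the plausible captions, i.e. the recall side.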
Related papers
- Positive-Augmented Contrastive Learning for Vision-and-Language Evaluation and Training [44.008094698200026]
PAC-S++ is a learnable metric that leverages the CLIP model, pre-trained on both web-collected and cleaned data.
We show that integrating PAC-S++ into the fine-tuning stage of a captioning model results in semantically richer captions with fewer repetitions and grammatical errors.
arXiv Detail & Related papers (2024-10-09T18:00:09Z)
- IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers [31.455819448471157]
Generative training has been demonstrated to be powerful for building visual-language models.
On zero-shot discriminative benchmarks, there is still a performance gap between models trained with generative and discriminative objectives.
In this paper, we aim to narrow this gap by improving the efficacy of generative training on classification tasks.
arXiv Detail & Related papers (2023-11-27T19:00:06Z)
- BLIP-Adapter: Parameter-Efficient Transfer Learning for Mobile Screenshot Captioning [0.5893124686141781]
This study proposes a combination of adapter methods, which necessitates tuning only the additional modules on the model.
By freezing the parameters of the image caption models and training only the weights associated with the methods, performance comparable to fine-tuning the entire model can be achieved.
arXiv Detail & Related papers (2023-09-26T09:16:44Z)
- Helping Hands: An Object-Aware Ego-Centric Video Recognition Model [60.350851196619296]
We introduce an object-aware decoder for improving the performance of ego-centric representations on ego-centric videos.
We show that the model can act as a drop-in replacement for an ego-aware video model to improve performance through visual-text grounding.
arXiv Detail & Related papers (2023-08-15T17:58:11Z)
- FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions [11.274127953112574]
We propose an automated approach to augmenting existing captions with visual details using "frozen" vision experts.
Our proposed method, FuseCap, fuses the outputs of such vision experts with the original captions using a large language model.
We release this large-scale dataset of enriched image-caption pairs for the community.
arXiv Detail & Related papers (2023-05-28T13:16:03Z)
- Zero-shot Visual Question Answering with Language Model Feedback [83.65140324876536]
We propose a language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA).
Our approach employs the captions generated by a captioning model as the context of an answer prediction model, which is a pre-trained language model (PLM).
arXiv Detail & Related papers (2023-05-26T15:04:20Z)
- Paraphrasing Is All You Need for Novel Object Captioning [126.66301869607656]
Novel object captioning (NOC) aims to describe images containing objects without observing their ground truth captions during training.
We present Paraphrasing-to-Captioning (P2C), a two-stage learning framework for NOC that optimizes the output captions via paraphrasing.
arXiv Detail & Related papers (2022-09-25T22:56:04Z)
- Prompt-based Learning for Unpaired Image Captioning [86.44188293709307]
Unpaired Image Captioning (UIC) has been developed to learn image descriptions from unaligned vision-language sample pairs.
Recent successes of Vision-Language Pre-Trained Models (VL-PTMs) have triggered the development of prompt-based learning.
In this paper, we present a novel prompt-based scheme to train the UIC model, making the best use of the powerful generalization ability of VL-PTMs.
arXiv Detail & Related papers (2022-05-26T03:13:43Z)
- VIVO: Visual Vocabulary Pre-Training for Novel Object Captioning [128.6138588412508]
This paper presents VIsual VOcabulary pretraining (VIVO) that performs pre-training in the absence of caption annotations.
Our model can not only generate fluent image captions that describe novel objects, but also identify the locations of these objects.
arXiv Detail & Related papers (2020-09-28T23:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.