On Distinctive Image Captioning via Comparing and Reweighting
- URL: http://arxiv.org/abs/2204.03938v1
- Date: Fri, 8 Apr 2022 08:59:23 GMT
- Title: On Distinctive Image Captioning via Comparing and Reweighting
- Authors: Jiuniu Wang, Wenjia Xu, Qingzhong Wang, Antoni B. Chan
- Abstract summary: In this paper, we aim to improve the distinctiveness of image captions via comparing and reweighting with a set of similar images.
Our metric reveals that the human annotations of each image in the MSCOCO dataset are not equally distinctive.
However, previous works typically treat human annotations equally during training, which could be one reason for generating less distinctive captions.
- Score: 52.3731631461383
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent image captioning models achieve impressive results on
popular metrics, e.g., BLEU, CIDEr, and SPICE. However, optimizing only for
the overlap between generated captions and human annotations encourages
common words and phrases, which lack distinctiveness, i.e., many similar
images share the same caption. In this
paper, we aim to improve the distinctiveness of image captions via comparing
and reweighting with a set of similar images. First, we propose a
distinctiveness metric -- between-set CIDEr (CIDErBtw) to evaluate the
distinctiveness of a caption with respect to those of similar images. Our
metric reveals that the human annotations of each image in the MSCOCO dataset
are not equally distinctive; however, previous works typically treat the
human annotations equally during training, which could be one reason for
generating less distinctive captions. In contrast, we reweight each
ground-truth caption according to its distinctiveness during training. We
further integrate a long-tailed weight strategy to highlight rare words that
carry more information, and sample captions from the similar image set as
negative examples to encourage the generated sentence to be unique.
Finally, extensive experiments are conducted, showing that our proposed
approach significantly improves both distinctiveness (as measured by CIDErBtw
and retrieval metrics) and accuracy (e.g., as measured by CIDEr) for a wide
variety of image captioning baselines. These results are further confirmed
through a user study.
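As a rough illustration of the CIDErBtw idea, here is a minimal sketch, not the paper's implementation: assuming an off-the-shelf `cider_score(candidate, references)` function and a retrieved set of similar images, the metric averages the CIDEr of a caption against the similar images' reference caption sets, and a hypothetical weighting then turns low CIDErBtw (more distinctive) into a higher training weight.

```python
# Minimal sketch of the CIDErBtw idea from the abstract. Assumptions
# (hypothetical, not from the paper): `cider_score(candidate, refs)` is
# any off-the-shelf CIDEr implementation, and `similar_refs` holds the
# ground-truth caption sets of the retrieved similar images.
from typing import Callable, List

def cider_btw(caption: str,
              similar_refs: List[List[str]],
              cider_score: Callable[[str, List[str]], float]) -> float:
    """Average CIDEr of `caption` against the caption sets of similar
    images; a lower value means a more distinctive caption."""
    return sum(cider_score(caption, refs)
               for refs in similar_refs) / len(similar_refs)

def caption_weights(gt_captions: List[str],
                    similar_refs: List[List[str]],
                    cider_score: Callable[[str, List[str]], float]) -> List[float]:
    """Hypothetical reweighting: more distinctive ground-truth captions
    (lower CIDErBtw) receive larger training weights. The paper defines
    its own weighting function; inverse scores are used here only to
    illustrate the direction of the reweighting."""
    inv = [1.0 / (cider_btw(c, similar_refs, cider_score) + 1e-8)
           for c in gt_captions]
    total = sum(inv)
    return [v / total for v in inv]  # normalized to sum to 1
```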
Related papers
- Fluent and Accurate Image Captioning with a Self-Trained Reward Model [47.213906345208315]
We propose Self-Cap, a captioning approach that relies on a learnable reward model based on self-generated negatives.
Our discriminator is a fine-tuned contrastive image-text model trained to promote caption correctness.
arXiv Detail & Related papers (2024-08-29T18:00:03Z)
- Transparent Human Evaluation for Image Captioning [70.03979566548823]
We develop a rubric-based human evaluation protocol for image captioning models.
We show that human-generated captions show substantially higher quality than machine-generated ones.
We hope that this work will promote a more transparent evaluation protocol for image captioning.
arXiv Detail & Related papers (2021-11-17T07:09:59Z)
- Group-based Distinctive Image Captioning with Memory Attention [45.763534774116856]
The Group-based Distinctive Captioning Model (GdisCap) improves the distinctiveness of image captions.
A new evaluation metric, distinctive word rate (DisWordRate), is proposed to measure the distinctiveness of captions.
arXiv Detail & Related papers (2021-08-20T12:46:36Z)
- Contrastive Semantic Similarity Learning for Image Captioning Evaluation with Intrinsic Auto-encoder [52.42057181754076]
Motivated by the auto-encoder mechanism and advances in contrastive representation learning, we propose a learning-based metric for image captioning.
We develop three progressive model structures to learn the sentence level representations.
Experiment results show that our proposed method can align well with the scores generated from other contemporary metrics.
arXiv Detail & Related papers (2021-06-29T12:27:05Z)
- Intrinsic Image Captioning Evaluation [53.51379676690971]
We propose a learning-based metric for image captioning, which we call Intrinsic Image Captioning Evaluation (I2CE).
Experiment results show that our proposed method maintains robust performance and gives more flexible scores to candidate captions when encountering semantically similar expressions or less aligned semantics.
arXiv Detail & Related papers (2020-12-14T08:36:05Z)
- Towards Unique and Informative Captioning of Images [40.036350846970706]
We analyze both modern captioning systems and evaluation metrics.
We design a new metric, SPICE-U, by introducing a notion of uniqueness over the concepts generated in a caption.
We show that SPICE-U is better correlated with human judgements compared to SPICE, and effectively captures notions of diversity and descriptiveness.
arXiv Detail & Related papers (2020-09-08T19:01:33Z)
- Compare and Reweight: Distinctive Image Captioning Using Similar Images Sets [52.3731631461383]
We aim to improve the distinctiveness of image captions through training with sets of similar images.
Our metric shows that the human annotations of each image are not equally distinctive.
arXiv Detail & Related papers (2020-07-14T07:40:39Z)