Improving Multimodal Datasets with Image Captioning
- URL: http://arxiv.org/abs/2307.10350v2
- Date: Thu, 26 Oct 2023 03:41:06 GMT
- Title: Improving Multimodal Datasets with Image Captioning
- Authors: Thao Nguyen, Samir Yitzhak Gadre, Gabriel Ilharco, Sewoong Oh, Ludwig
Schmidt
- Abstract summary: We study how generated captions can increase the utility of web-scraped datapoints with nondescript text.
Our experiments with using generated captions at DataComp's large scale (1.28B image-text pairs) offer insights into the limitations of synthetic text.
- Score: 65.74736570293622
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Massive web datasets play a key role in the success of large vision-language
models like CLIP and Flamingo. However, the raw web data is noisy, and existing
filtering methods to reduce noise often come at the expense of data diversity.
Our work focuses on caption quality as one major source of noise, and studies
how generated captions can increase the utility of web-scraped datapoints with
nondescript text. Through exploring different mixing strategies for raw and
generated captions, we outperform the best filtering method proposed by the
DataComp benchmark by 2% on ImageNet and 4% on average across 38 tasks, given a
candidate pool of 128M image-text pairs. Our best approach is also 2x better at
Flickr and MS-COCO retrieval. We then analyze what makes synthetic captions an
effective source of text supervision. In experimenting with different image
captioning models, we also demonstrate that the performance of a model on
standard image captioning benchmarks (e.g., NoCaps CIDEr) is not a reliable
indicator of the utility of the captions it generates for multimodal training.
Finally, our experiments with using generated captions at DataComp's large
scale (1.28B image-text pairs) offer insights into the limitations of synthetic
text, as well as the importance of image curation with increasing training data
quantity. The synthetic captions used in our experiments are now available on
HuggingFace.
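The abstract does not spell out a specific mixing recipe, so the following is only a minimal sketch of one plausible strategy under stated assumptions: per-pair CLIP similarity scores are precomputed, generate_caption stands in for any image captioning model (e.g., a BLIP-style captioner), and the threshold and field names are illustrative rather than the paper's exact configuration.

```python
# Sketch of one possible raw/synthetic caption mixing strategy (illustrative,
# not the paper's exact recipe). Assumes per-pair CLIP similarity scores are
# precomputed and `generate_caption` is any image captioning model.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Pair:
    image_path: str
    raw_caption: str
    clip_score: float  # similarity between image and raw_caption


def mix_captions(
    pool: List[Pair],
    generate_caption: Callable[[str], str],
    keep_threshold: float = 0.3,  # hypothetical cutoff
) -> List[dict]:
    """Keep raw captions that already align well with the image;
    replace nondescript ones with a model-generated caption."""
    mixed = []
    for pair in pool:
        if pair.clip_score >= keep_threshold:
            text = pair.raw_caption                   # raw text is descriptive enough
        else:
            text = generate_caption(pair.image_path)  # synthetic caption
        mixed.append({"image": pair.image_path, "text": text})
    return mixed


if __name__ == "__main__":
    # Toy usage with a stand-in captioner.
    pool = [
        Pair("cat.jpg", "IMG_0042.JPG", clip_score=0.05),
        Pair("dog.jpg", "a brown dog playing fetch in a park", clip_score=0.41),
    ]
    demo_captioner = lambda path: f"a photo of {path.split('.')[0]}"
    print(mix_captions(pool, demo_captioner))
```

A common variant keeps both texts for borderline pairs and samples between raw and synthetic captions during training instead of hard replacement; the same scaffold accommodates that choice.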
Related papers
- Text Data-Centric Image Captioning with Interactive Prompts [20.48013600818985]
Supervised image captioning approaches have made great progress, but it is challenging to collect high-quality human-annotated image-text data.
This paper proposes a new Text data-centric approach with Interactive Prompts for image Captioning, named TIPCap.
arXiv Detail & Related papers (2024-03-28T07:43:49Z) - Filter & Align: Leveraging Human Knowledge to Curate Image-Text Data [31.507451966555383]
We present a novel algorithm that incorporates human knowledge of image-text alignment to guide the filtering of vast web-crawled image-text corpora.
We collect a diverse image-text dataset where each image is associated with multiple captions from various sources.
We train a reward model on these human-preference annotations to internalize the nuanced human understanding of image-text alignment.
arXiv Detail & Related papers (2023-12-11T05:57:09Z) - CapsFusion: Rethinking Image-Text Data at Scale [32.334143749598766]
We propose CapsFusion to consolidate and refine information from both web-based image-text pairs and synthetic captions.
Experiments show that CapsFusion captions exhibit remarkable all-round superiority over existing captions in terms of model performance.
arXiv Detail & Related papers (2023-10-31T15:31:39Z) - Sieve: Multimodal Dataset Pruning Using Image Captioning Models [11.362835828985494]
Vision-Language Models (VLMs) are pretrained on large, diverse, and noisy web-crawled datasets.
We argue that filtering such data with CLIP scores suffers from multiple limitations, including false positives and negatives due to CLIP's pretraining on noisy labels.
We propose a pruning signal, Sieve, that employs synthetic captions generated by image-captioning models pretrained on small, diverse, and well-aligned image-text pairs (see the sketch after this list).
arXiv Detail & Related papers (2023-10-03T14:53:53Z) - Towards Better Multi-modal Keyphrase Generation via Visual Entity
Enhancement and Multi-granularity Image Noise Filtering [79.44443231700201]
Multi-modal keyphrase generation aims to produce a set of keyphrases that represent the core points of the input text-image pair.
The input text and image are often not perfectly matched, and thus the image may introduce noise into the model.
We propose a novel multi-modal keyphrase generation model, which not only enriches the model input with external knowledge, but also effectively filters image noise.
arXiv Detail & Related papers (2023-09-09T09:41:36Z) - ALIP: Adaptive Language-Image Pre-training with Synthetic Caption [78.93535202851278]
Contrastive Language-Image Pre-training (CLIP) has significantly boosted the performance of various vision-language tasks.
The presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning.
We propose Adaptive Language-Image Pre-training (ALIP), a bi-path model that integrates supervision from both raw text and synthetic captions.
arXiv Detail & Related papers (2023-08-16T15:19:52Z) - Semi-Supervised Image Captioning by Adversarially Propagating Labeled
Data [95.0476489266988]
We present a novel data-efficient semi-supervised framework to improve the generalization of image captioning models.
Our proposed method trains a captioner to learn from paired data and to progressively associate unpaired data.
We report extensive empirical results on both (1) image-based and (2) dense region-based captioning datasets, followed by a comprehensive analysis of the scarcely-paired dataset.
arXiv Detail & Related papers (2023-01-26T15:25:43Z) - Noise-aware Learning from Web-crawled Image-Text Data for Image
Captioning [6.101765622702223]
The Noise-aware Captioning (NoC) framework learns rich knowledge from the entire web-crawled dataset while being less affected by the noise.
This is achieved by the proposed alignment-level-controllable captioner, which is learned using alignment levels of the image-text pairs as a control signal.
An in-depth analysis shows the effectiveness of our framework in handling noise.
arXiv Detail & Related papers (2022-12-27T17:33:40Z) - Generating More Pertinent Captions by Leveraging Semantics and Style on
Multi-Source Datasets [56.018551958004814]
This paper addresses the task of generating fluent descriptions by training on a non-uniform combination of data sources.
Large-scale datasets with noisy image-text pairs provide a sub-optimal source of supervision.
We propose to leverage and separate semantics and descriptive style through the incorporation of a style token and keywords extracted through a retrieval component.
arXiv Detail & Related papers (2021-11-24T19:00:05Z) - Intrinsic Image Captioning Evaluation [53.51379676690971]
We propose a learning-based metric for image captioning, which we call Intrinsic Image Captioning Evaluation (I2CE).
Experimental results show that our proposed method maintains robust performance and gives more flexible scores to candidate captions that contain semantically similar expressions or less aligned semantics.
arXiv Detail & Related papers (2020-12-14T08:36:05Z)
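Several of the related papers above use synthetic captions as a pruning signal rather than as replacement training text (e.g., Sieve). Below is a minimal sketch of that idea under simple assumptions: a token-overlap similarity stands in for the sentence-embedding similarity a real pipeline would use, and the helper names and keep fraction are hypothetical, not Sieve's actual settings.

```python
# Sketch of a Sieve-style pruning signal: score each web pair by how well its
# raw alt-text agrees with a caption generated from the image, then keep only
# the best-scoring pairs. Jaccard overlap is a cheap stand-in for a learned
# text-similarity model; all names and the keep fraction are illustrative.
from typing import Callable, List, Tuple


def jaccard_similarity(a: str, b: str) -> float:
    """Cheap stand-in for a learned sentence-similarity model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def prune_pool(
    pairs: List[Tuple[str, str]],            # (image_path, raw_caption)
    generate_caption: Callable[[str], str],  # image -> synthetic caption
    keep_fraction: float = 0.3,              # hypothetical keep rate
) -> List[Tuple[str, str]]:
    scored = [
        (jaccard_similarity(raw, generate_caption(img)), (img, raw))
        for img, raw in pairs
    ]
    scored.sort(key=lambda item: item[0], reverse=True)
    keep = max(1, int(len(scored) * keep_fraction))
    return [pair for _, pair in scored[:keep]]
```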