SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?
- URL: http://arxiv.org/abs/2402.01832v2
- Date: Thu, 18 Jul 2024 10:21:29 GMT
- Title: SynthCLIP: Are We Ready for a Fully Synthetic CLIP Training?
- Authors: Hasan Abed Al Kader Hammoud, Hani Itani, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
- Abstract summary: We present SynthCLIP, a CLIP model trained on entirely synthetic text-image pairs. We generate synthetic datasets of images and corresponding captions at scale, with no human intervention.
- Score: 57.42016037768947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present SynthCLIP, a CLIP model trained on entirely synthetic text-image pairs. Leveraging recent text-to-image (TTI) networks and large language models (LLMs), we generate synthetic datasets of images and corresponding captions at scale, with no human intervention. In this work, we provide an analysis of CLIP models trained on synthetic data. We provide insights on the data generation strategy, the number of samples required, scaling trends, and the resulting properties. We also introduce SynthCI-30M, a purely synthetic dataset comprising 30 million captioned images. Our code, trained models, and data are released as open source at https://github.com/hammoudhasan/SynthCLIP
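
The abstract describes a three-stage pipeline: an LLM proposes captions, a text-to-image model renders an image for each caption, and CLIP is trained contrastively on the resulting synthetic pairs. The sketch below is a minimal, hypothetical illustration of that loop using the diffusers and open_clip libraries; the model identifiers, the hand-written caption list, and the hyperparameters are assumptions for demonstration, not the authors' released code (see the linked repository for that).

```python
# Illustrative sketch only: the model ids ("stabilityai/stable-diffusion-2-1",
# "ViT-B-32"), the fixed caption list, and the hyperparameters are assumptions,
# not the SynthCLIP release.
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline
import open_clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Step 1 (stand-in): captions that an LLM would normally generate at scale.
captions = [
    "a red bicycle leaning against a brick wall",
    "a bowl of ramen topped with a soft-boiled egg",
]

# Step 2: render one synthetic image per caption with a text-to-image model.
tti = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1").to(device)
images = [tti(caption).images[0] for caption in captions]

# Step 3: one symmetric contrastive (CLIP) update on the synthetic pairs.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32")  # random init
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model = model.to(device).train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

pixels = torch.stack([preprocess(img) for img in images]).to(device)
tokens = tokenizer(captions).to(device)

img_feat = F.normalize(model.encode_image(pixels), dim=-1)
txt_feat = F.normalize(model.encode_text(tokens), dim=-1)
logits = model.logit_scale.exp() * img_feat @ txt_feat.t()
labels = torch.arange(len(captions), device=device)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"contrastive loss on the synthetic batch: {loss.item():.4f}")
```

In the paper's setting this loop would run over tens of millions of generated pairs rather than a two-item batch; the sketch only shows how the caption generation, image synthesis, and contrastive update fit together.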