Do CLIPs Always Generalize Better than ImageNet Models?
- URL: http://arxiv.org/abs/2403.11497v1
- Date: Mon, 18 Mar 2024 06:04:02 GMT
- Title: Do CLIPs Always Generalize Better than ImageNet Models?
- Authors: Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, Tong Zhang
- Abstract summary: Large vision language models, such as CLIPs, have revolutionized modern machine learning.
We show that CLIPs trained on either LAION or the OpenAI data exhibit notable performance drops on the counter group.
Surprisingly, we observe that single-modal models trained on ImageNet are more robust than CLIPs.
- Score: 45.87070442259975
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Large vision language models, such as CLIPs, have revolutionized modern machine learning. CLIPs have demonstrated great generalizability under distribution shifts, supported by an increasing body of literature. However, the evaluation datasets for CLIPs are mostly variations of ImageNet benchmarks, which may not fully reflect the extent to which CLIPs, e.g., those pre-trained on LAION, are robust to spurious correlations. To bridge the gap, we collect a real-world dataset called CounterAnimal that contains realistic spurious features found in animal photos. CounterAnimal consists of a) the common group, comprising animals on common backgrounds, and b) the counter group, comprising animals on unusual backgrounds. The performance drop from the common group to the counter group quantifies a model's reliance on spurious features (i.e., backgrounds) to predict the animals. We find that CLIPs trained on either LAION or the OpenAI data exhibit notable performance drops on the counter group. Surprisingly, we observe that single-modal models trained on ImageNet are more robust than CLIPs. We provide both theoretical and empirical explanations for why CLIPs still learn spurious features. Our findings suggest that distribution shifts remain an open problem for CLIPs, and one needs to be cautious about test setups when evaluating foundation models pre-trained on a significantly different scale and distribution.
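The measurement the abstract describes (zero-shot accuracy on the common group versus the counter group, with the gap attributed to background reliance) is straightforward to reproduce. Below is a minimal sketch using the open_clip library; the checkpoint name, the prompt template, and the counteranimal/common and counteranimal/counter directory layout are illustrative assumptions, not the paper's released evaluation code.

```python
import torch
import open_clip
from torch.utils.data import DataLoader
from torchvision.datasets import ImageFolder

# Load an open-source CLIP. The LAION-2B ViT-B/32 checkpoint is one plausible
# choice for the paper's LAION-trained setting (assumption, not the exact list).
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

@torch.no_grad()
def zero_shot_accuracy(image_dir: str, class_names: list[str]) -> float:
    """Zero-shot CLIP accuracy on an ImageFolder-style directory."""
    dataset = ImageFolder(image_dir, transform=preprocess)
    loader = DataLoader(dataset, batch_size=64)
    # One text embedding per class from a simple prompt template (assumed).
    prompts = tokenizer([f"a photo of a {c}" for c in class_names])
    text_features = model.encode_text(prompts)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    correct = total = 0
    for images, labels in loader:
        image_features = model.encode_image(images)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Predict the class whose text embedding is most similar.
        preds = (image_features @ text_features.T).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Placeholder paths; both splits are assumed to contain the same class
# subfolders so that ImageFolder assigns matching label indices.
classes = ImageFolder("counteranimal/common").classes
acc_common = zero_shot_accuracy("counteranimal/common", classes)
acc_counter = zero_shot_accuracy("counteranimal/counter", classes)
print(f"common: {acc_common:.3f}  counter: {acc_counter:.3f}  "
      f"drop: {acc_common - acc_counter:.3f}")
```

A larger common-to-counter drop indicates heavier reliance on background cues; the paper reports this gap for CLIPs trained on LAION and the OpenAI data, and compares it against single-modal ImageNet models.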
Related papers
- Deciphering the Role of Representation Disentanglement: Investigating Compositional Generalization in CLIP Models [3.9326597037266455]
Compositional Out of Distribution (C-OoD) generalization is relatively unexplored for CLIP models.
Our study reveals that the disentanglement of image and text representations, particularly with respect to their compositional elements, plays a crucial role in improving the generalization of CLIP models.
arXiv Detail & Related papers (2024-07-08T13:04:40Z)
- Generalization Beyond Data Imbalance: A Controlled Study on CLIP for Transferable Insights [67.72413262980272]
Severe data imbalance naturally exists among web-scale vision-language datasets.
We find that CLIP pre-trained on such data exhibits notable robustness to data imbalance compared to supervised learning.
The robustness and discriminability of CLIP improve with more descriptive language supervision, larger data scale, and broader open-world concepts.
arXiv Detail & Related papers (2024-05-31T17:57:24Z)
- Language Plays a Pivotal Role in the Object-Attribute Compositional Generalization of CLIP [3.5999252362400993]
We study whether vision-language models can successfully classify images with novel compositions of attribute-object pairs.
We find that CLIPs trained on large datasets, such as OpenAI's data, LAION-400M, and LAION-2B, show orders-of-magnitude improvements in effective compositional OoD generalization.
Our results provide evidence that the scale and diversity of training data and language supervision play a key role in unlocking the compositional generalization abilities of vision-language models.
arXiv Detail & Related papers (2024-03-27T12:59:44Z)
- CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement [65.47237619200442]
Contrastive language image pretraining (CLIP) is a standard method for training vision-language models.
We augment CLIP training with task-specific vision models from model zoos to improve its visual representations.
This simple setup shows substantial improvements of up to 16.3% across different vision tasks.
arXiv Detail & Related papers (2023-10-21T20:20:13Z)
- Does CLIP's Generalization Performance Mainly Stem from High Train-Test Similarity? [21.556266136890844]
Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs.
We show that high train-test similarity is insufficient to explain CLIP's OOD performance.
arXiv Detail & Related papers (2023-10-14T11:24:28Z)
- CLIPood: Generalizing CLIP to Out-of-Distributions [73.86353105017076]
Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performances.
We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on unseen test data.
Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
arXiv Detail & Related papers (2023-02-02T04:27:54Z)
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)
- Democratizing Contrastive Language-Image Pre-training: A CLIP Benchmark of Data, Model, and Supervision [26.13829720290035]
Contrastive Language-Image Pretraining (CLIP) has emerged as a novel paradigm to learn visual models from language supervision.
We propose CLIP-benchmark, a first attempt to evaluate, analyze, and benchmark CLIP and its variants.
arXiv Detail & Related papers (2022-03-11T08:41:00Z)
- The CLEAR Benchmark: Continual LEArning on Real-World Imagery [77.98377088698984]
Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI.
We introduce CLEAR, the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts.
We find that a simple unsupervised pre-training step can already boost state-of-the-art CL algorithms.
arXiv Detail & Related papers (2022-01-17T09:09:09Z)