ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free
Domain Adaptation
- URL: http://arxiv.org/abs/2308.03793v2
- Date: Thu, 14 Dec 2023 03:55:55 GMT
- Title: ReCLIP: Refine Contrastive Language Image Pre-Training with Source Free
Domain Adaptation
- Authors: Xuefeng Hu, Ke Zhang, Lu Xia, Albert Chen, Jiajia Luo, Yuyin Sun, Ken
Wang, Nan Qiao, Xiao Zeng, Min Sun, Cheng-Hao Kuo, Ram Nevatia
- Abstract summary: ReCLIP is a source-free domain adaptation method for vision-language models.
We demonstrate that ReCLIP reduces the average error rate of CLIP from 30.17% to 25.06% on 22 image classification benchmarks.
- Score: 20.57370550156505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large-scale pre-trained vision-language models such as CLIP have
demonstrated outstanding performance in zero-shot classification, e.g. achieving
76.3% top-1 accuracy on ImageNet without seeing any examples, which offers
potential benefits to many tasks that have no labeled data. However, when
applying CLIP to a downstream target domain, the presence of visual and text
domain gaps and cross-modality misalignment can greatly impact model
performance. To address these challenges, we propose ReCLIP, the first
source-free domain adaptation method for vision-language models, which requires
neither source data nor labeled target data. ReCLIP first learns a projection
space to mitigate misaligned visual-text embeddings and generates pseudo labels,
and then deploys cross-modality self-training with the pseudo labels to update
the visual and text encoders, refine the labels, and iteratively reduce domain
gaps and misalignments. With extensive experiments, we demonstrate that ReCLIP
reduces the average error rate of CLIP from 30.17% to 25.06% on 22 image
classification benchmarks. Code is available at https://github.com/michiganleon/ReCLIP_WACV.
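As a rough illustration of the zero-shot setup the abstract builds on, the sketch below runs CLIP zero-shot classification and derives pseudo labels after a simplified embedding projection. It assumes the openai/CLIP package; the class names, prompt template, and mean-direction projection are illustrative stand-ins, not ReCLIP's learned projection space or training procedure.

```python
# Hedged sketch: CLIP zero-shot classification with a simplified embedding
# projection and pseudo-label assignment. The class names, prompt template,
# and projection below are illustrative assumptions, not ReCLIP's exact method.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["cat", "dog", "car"]  # hypothetical target-domain classes
text_tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

with torch.no_grad():
    text_emb = model.encode_text(text_tokens).float()
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

    # Simplified projection: remove the shared mean direction of the class
    # embeddings and re-normalize. ReCLIP instead *learns* a projection space;
    # this is only a rough stand-in for that idea.
    mean_dir = text_emb.mean(dim=0, keepdim=True)
    mean_dir = mean_dir / mean_dir.norm()
    text_proj = text_emb - (text_emb @ mean_dir.T) * mean_dir
    text_proj = text_proj / text_proj.norm(dim=-1, keepdim=True)

def pseudo_label(images):
    """Assign pseudo labels to a batch of preprocessed target-domain images."""
    with torch.no_grad():
        img_emb = model.encode_image(images.to(device)).float()
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        img_proj = img_emb - (img_emb @ mean_dir.T) * mean_dir
        img_proj = img_proj / img_proj.norm(dim=-1, keepdim=True)
        logits = 100.0 * img_proj @ text_proj.T  # scaled cosine similarity
        conf, labels = logits.softmax(dim=-1).max(dim=-1)
    return labels, conf
```

In ReCLIP itself, such pseudo labels would then drive cross-modality self-training that updates both encoders and refines the labels; the sketch stops at label assignment.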
Related papers
- Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting [55.361337202198925]
Vision-language models, such as CLIP, have shown impressive generalization capacities when using appropriate text descriptions.
We propose a label-free prompt distribution learning and bias correction framework, dubbed **Frolic**, which boosts zero-shot performance without the need for labeled data.
arXiv Detail & Related papers (2024-10-25T04:00:45Z)
- CLIP meets Model Zoo Experts: Pseudo-Supervision for Visual Enhancement [65.47237619200442]
Contrastive language image pretraining (CLIP) is a standard method for training vision-language models.
We augment CLIP training with task-specific vision models from model zoos to improve its visual representations.
This simple setup shows substantial improvements of up to 16.3% across different vision tasks.
arXiv Detail & Related papers (2023-10-21T20:20:13Z)
- VeCLIP: Improving CLIP Training via Visual-enriched Captions [63.547204530720705]
This study introduces a scalable pipeline for noisy caption rewriting.
We emphasize the incorporation of visual concepts into captions, termed Visual-enriched Captions (VeCap).
We showcase the adaptation of this method for training CLIP on large-scale web-crawled datasets, termed VeCLIP.
arXiv Detail & Related papers (2023-10-11T17:49:13Z)
- CLIP-VG: Self-paced Curriculum Adapting of CLIP for Visual Grounding [91.97362831507434]
Unsupervised visual grounding has been developed to locate regions using pseudo-labels.
We propose CLIP-VG, a novel method that can conduct self-paced curriculum adapting of CLIP with pseudo-language labels.
Our method outperforms the current state-of-the-art unsupervised method by a significant margin on RefCOCO/+/g datasets.
arXiv Detail & Related papers (2023-05-15T14:42:02Z)
- Less is More: Removing Text-regions Improves CLIP Training Efficiency and Robustness [19.77762574325687]
The CLIP (Contrastive Language-Image Pre-training) model and its variants are becoming the de facto backbone in many applications.
We discuss two effective approaches to improve the efficiency and robustness of CLIP training.
Our filter-based CLIP model achieves a top-1 accuracy of 68.78%, outperforming previous models, all of which scored below 50%.
arXiv Detail & Related papers (2023-05-08T23:47:07Z)
- Improving Zero-Shot Models with Label Distribution Priors [33.51714665243138]
We propose a new approach, CLIPPR, which adapts zero-shot models for regression and classification on unlabelled datasets.
We demonstrate an improvement of 28% in mean absolute error on the UTK age regression task.
We also present promising results for classification benchmarks, improving the classification accuracy on the ImageNet dataset by 2.83%, without using any labels.
arXiv Detail & Related papers (2022-12-01T18:59:03Z)
- Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z)
- Masked Unsupervised Self-training for Zero-shot Image Classification [98.23094305347709]
Masked Unsupervised Self-Training (MUST) is a new approach that leverages two different and complementary sources of supervision: pseudo-labels and raw images.
MUST improves upon CLIP by a large margin and narrows the performance gap between unsupervised and supervised classification.
arXiv Detail & Related papers (2022-06-07T02:03:06Z)
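Several of the entries above (MUST, CLIP-VG, and ReCLIP itself) rely on the model's own predictions as pseudo labels on unlabeled target data. The sketch below shows one generic, confidence-filtered pseudo-label update step; the encoder interface, temperature, threshold, and loss are assumptions for illustration and do not reproduce any of these papers' actual training recipes.

```python
# Hedged sketch: a generic confidence-filtered pseudo-label training step, in
# the spirit of the self-training used by MUST and ReCLIP. All settings here
# (encoder interface, temperature, threshold) are illustrative assumptions.
import torch
import torch.nn.functional as F

def self_training_step(image_encoder, text_emb, images, optimizer,
                       temperature=0.01, conf_threshold=0.9):
    """One update on unlabeled target images using the model's own predictions.

    image_encoder: any module mapping images -> embeddings of the same width
                   as text_emb.
    text_emb:      frozen, L2-normalized class text embeddings [num_classes, d].
    """
    # 1) Pseudo labels from the current model (no gradients).
    with torch.no_grad():
        emb = image_encoder(images)
        emb = emb / emb.norm(dim=-1, keepdim=True)
        probs = (emb @ text_emb.T / temperature).softmax(dim=-1)
        conf, pseudo = probs.max(dim=-1)
        keep = conf > conf_threshold  # keep only confident predictions

    if keep.sum() == 0:
        return None  # nothing confident enough in this batch

    # 2) Cross-entropy against the kept pseudo labels (with gradients).
    emb = image_encoder(images[keep])
    emb = emb / emb.norm(dim=-1, keepdim=True)
    logits = emb @ text_emb.T / temperature
    loss = F.cross_entropy(logits, pseudo[keep])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```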
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.