Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning
- URL: http://arxiv.org/abs/2311.04464v3
- Date: Thu, 7 Dec 2023 04:35:24 GMT
- Title: Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning
- Authors: Yao Zhu, Yuefeng Chen, Wei Wang, Xiaofeng Mao, Xiu Yan, Yue Wang,
Zhigang Li, Wang Lu, Jindong Wang, Xiangyang Ji
- Abstract summary: Methods based on Contrastive Language-Image Pre-training have exhibited promising performance in few-shot adaptation tasks.
We propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.
- Score: 61.902254546858465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning generalized representations from limited training samples is crucial
for applying deep neural networks in low-resource scenarios. Recently, methods
based on Contrastive Language-Image Pre-training (CLIP) have exhibited
promising performance in few-shot adaptation tasks. To avoid catastrophic
forgetting and overfitting caused by few-shot fine-tuning, existing works
usually freeze the parameters of CLIP pre-trained on large-scale datasets,
overlooking the possibility that some parameters might not be suitable for
downstream tasks. To this end, we revisit CLIP's visual encoder with a specific
focus on its distinctive attention pooling layer, which performs a spatial
weighted-sum of the dense feature maps. Given that dense feature maps contain
meaningful semantic information, and different semantics hold varying
importance for diverse downstream tasks (such as prioritizing semantics like
ears and eyes in pet classification tasks rather than side mirrors), using the
same weighted-sum operation for dense features across different few-shot tasks
might not be appropriate. Hence, we propose fine-tuning the parameters of the
attention pooling layer during the training process to encourage the model to
focus on task-specific semantics. In the inference process, we perform residual
blending between the features pooled by the fine-tuned and the original
attention pooling layers to incorporate both the few-shot knowledge and the
pre-trained CLIP's prior knowledge. We term this method Semantic-Aware
FinE-tuning (SAFE). SAFE is effective in enhancing conventional few-shot
CLIP and is compatible with the existing adapter approach (termed SAFE-A).
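As a rough illustration (not the authors' released implementation), the sketch below assumes a CLIP ResNet visual encoder that exposes its attention pooling layer as `visual.attnpool`: only that layer is unfrozen for few-shot training, and at inference its output is residually blended with the output of a frozen copy of the original pooling layer. The helper `visual_trunk`, the blend weight `alpha`, and all other names are assumptions.

```python
import copy
import torch
import torch.nn.functional as F

# Minimal sketch of the Semantic-Aware FinE-tuning (SAFE) idea, assuming a CLIP
# ResNet visual encoder that exposes its attention pooling layer as
# `visual.attnpool` (as in OpenAI's CLIP implementation). Names such as
# `clip_model`, `visual_trunk`, and `alpha` are illustrative assumptions.

def prepare_safe(clip_model):
    """Freeze CLIP and unfreeze only the attention pooling layer."""
    for p in clip_model.parameters():
        p.requires_grad = False
    # Keep a frozen copy of the original pooling layer for inference blending.
    original_attnpool = copy.deepcopy(clip_model.visual.attnpool)
    for p in original_attnpool.parameters():
        p.requires_grad = False
    # Only the attention pooling parameters are fine-tuned on the few-shot task.
    for p in clip_model.visual.attnpool.parameters():
        p.requires_grad = True
    return original_attnpool

@torch.no_grad()
def safe_inference(clip_model, original_attnpool, images, text_features, alpha=0.5):
    """Residually blend features from the fine-tuned and original pooling layers."""
    # Dense feature map before pooling (hypothetical helper: in practice, the
    # ResNet trunk of clip_model.visual run up to, but excluding, attnpool).
    dense = clip_model.visual_trunk(images)              # assumed shape (B, C, H, W)
    finetuned_feat = clip_model.visual.attnpool(dense)   # task-specific semantics
    original_feat = original_attnpool(dense)             # pre-trained CLIP prior
    image_feat = alpha * finetuned_feat + (1.0 - alpha) * original_feat
    image_feat = F.normalize(image_feat, dim=-1)
    logits = 100.0 * image_feat @ F.normalize(text_features, dim=-1).t()
    return logits
```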
Related papers
- SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training [68.7896349660824]
We present an in-depth analysis of the progressive overfitting problem through the lens of sequential fine-tuning (Seq FT).
Observing that overly fast representation learning and a biased classification layer together cause this problem, we introduce the Slow Learner with Alignment (SLCA++) framework.
Our approach uses a Slow Learner to selectively reduce the learning rate of the backbone parameters, and an Alignment stage that aligns the disjoint classification layers in a post-hoc fashion (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-08-15T17:50:07Z)
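The "slow learner" idea of shrinking the backbone learning rate relative to the classifier can be pictured with standard PyTorch parameter groups. The tiny model and the 0.01 ratio below are placeholders rather than values from the paper, and the post-hoc classifier alignment step is not shown.

```python
import torch
import torch.nn as nn

# Illustrative "slow learner" sketch: the backbone receives a much smaller
# learning rate than the classification head. Architecture and the 0.01 ratio
# are assumptions for illustration only.
class TinyModel(nn.Module):
    def __init__(self, dim=512, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, x):
        return self.classifier(self.backbone(x))

model = TinyModel()
base_lr = 0.1
optimizer = torch.optim.SGD(
    [
        {"params": model.backbone.parameters(), "lr": base_lr * 0.01},  # slow backbone
        {"params": model.classifier.parameters(), "lr": base_lr},       # regular head
    ],
    momentum=0.9,
)
```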
- Continual Panoptic Perception: Towards Multi-modal Incremental Interpretation of Remote Sensing Images [16.0258685984844]
Continual learning (CL) breaks away from the one-off training paradigm and enables a model to adapt continually to new data, semantics, and tasks.
We propose a unified continual learning model that leverages multi-task joint learning covering pixel-level classification, instance-level segmentation and image-level perception.
arXiv Detail & Related papers (2024-07-19T12:22:32Z)
- Pay Attention to Your Neighbours: Training-Free Open-Vocabulary Semantic Segmentation [19.20874993309959]
Vision-language foundation models such as CLIP have showcased remarkable effectiveness on numerous zero-shot image-level tasks.
We propose a baseline for training-free open-vocabulary semantic segmentation (OVSS), termed Neighbour-Aware CLIP (NACLIP).
Our method enforces spatial localization of patches in the self-attention of CLIP's vision transformer, a property that is crucial for dense prediction tasks yet has been overlooked in the OVSS literature (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-12T01:08:04Z)
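One way to picture enforcing patch localization in self-attention is an additive spatial bias that favours neighbouring patches. The Gaussian form and the `sigma` parameter below are illustrative assumptions, not necessarily NACLIP's exact formulation.

```python
import torch

def neighbourhood_bias(h, w, sigma=1.0):
    """Additive attention bias that favours spatially neighbouring patches.

    Returns an (h*w, h*w) tensor whose entry (i, j) is larger when patches i and j
    are close on the feature grid. Purely illustrative; sigma is a free parameter.
    """
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (h*w, 2)
    dist2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)    # squared grid distances
    return -dist2 / (2 * sigma ** 2)  # Gaussian log-kernel to add to attention logits

# Usage sketch: for patch-token attention logits of shape (batch, heads, h*w, h*w),
# attn = torch.softmax(attn_logits + neighbourhood_bias(h, w), dim=-1)
```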
- Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning [82.29761875805369]
One of the ultimate goals of representation learning is to achieve compactness within a class and good separability between classes.
We propose a novel perspective: pre-defined class anchors serve as feature centroids that unidirectionally guide feature learning (a minimal sketch follows this entry).
The proposed Semantic Anchor Regularization (SAR) can be used in a plug-and-play manner in existing models.
arXiv Detail & Related papers (2023-12-19T05:52:38Z)
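A minimal reading of pre-defined class anchors as feature centroids: fix one non-learnable anchor per class and pull each sample's feature toward its anchor. The cosine cross-entropy below is one plausible instantiation, not the paper's exact SAR loss.

```python
import torch
import torch.nn.functional as F

def anchor_regularization(features, labels, anchors, temperature=0.07):
    """Pull features toward fixed, pre-defined class anchors.

    features: (B, D) learnable embeddings; anchors: (C, D) frozen class anchors.
    A cosine-similarity cross-entropy so gradients flow only into `features`
    (unidirectional guidance). Illustrative sketch, not the paper's exact loss.
    """
    features = F.normalize(features, dim=-1)
    anchors = F.normalize(anchors.detach(), dim=-1)  # anchors are never updated
    logits = features @ anchors.t() / temperature    # (B, C)
    return F.cross_entropy(logits, labels)

# Example: 4 classes with fixed, axis-aligned anchors in a 16-d feature space.
anchors = torch.eye(4, 16)                     # pre-defined, non-learnable anchors
feats = torch.randn(8, 16, requires_grad=True)
labels = torch.randint(0, 4, (8,))
loss = anchor_regularization(feats, labels, anchors)
```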
- A Closer Look at the Explainability of Contrastive Language-Image Pre-training [16.10032166963232]
Contrastive language-image pre-training (CLIP) is a powerful vision-language model that has shown great benefits for various tasks.
We have identified issues with its explainability that undermine its credibility and limit its capacity for related tasks.
We propose CLIP Surgery for reliable class activation mapping (CAM), a method that allows surgery-like modifications to the inference architecture and features.
arXiv Detail & Related papers (2023-04-12T07:16:55Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth [83.94528876742096]
We tackle the multi-task learning (MTL) problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate AffineMix, a novel data augmentation for the semantic segmentation task that uses predicted depth, and ColorAug, a simple depth augmentation that uses predicted semantics.
Finally, we validate the performance gains of the proposed method on the Cityscapes dataset, achieving state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace region regression and classification with cross-modality region contrastive learning (a simplified sketch follows this entry).
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
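The move from region regression and classification to cross-modality region contrastive learning can be pictured as an InfoNCE loss between matched region and word features. The one-to-one pairing below is a simplifying assumption, not the paper's full region/word matching scheme.

```python
import torch
import torch.nn.functional as F

def region_contrastive_loss(region_feats, word_feats, temperature=0.07):
    """Symmetric cross-modality contrastive loss over matched region/word pairs.

    region_feats, word_feats: (N, D) tensors where row i of each forms a matched
    pair. A simplified InfoNCE sketch of region-level visual-linguistic contrast.
    """
    r = F.normalize(region_feats, dim=-1)
    w = F.normalize(word_feats, dim=-1)
    logits = r @ w.t() / temperature            # (N, N) cross-modal similarity
    targets = torch.arange(r.size(0))           # matched pairs lie on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```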
- CSS-LM: A Contrastive Framework for Semi-supervised Fine-tuning of Pre-trained Language Models [59.49705076369856]
We introduce a novel framework to improve the fine-tuning phase of pre-trained language models (PLMs).
We retrieve positive and negative instances from large-scale unlabeled corpora according to their domain-level and class-level semantic relatedness to a task (see the retrieval sketch after this entry).
We then perform contrastive semi-supervised learning on both the retrieved unlabeled and original labeled instances to help PLMs capture crucial task-related semantic features.
arXiv Detail & Related papers (2021-02-07T09:27:26Z)
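The retrieval step, selecting unlabeled instances by their semantic relatedness to a task, can be sketched as a nearest-neighbour search over embeddings. The mean-cosine-similarity criterion and all names below are assumptions for illustration; CSS-LM additionally distinguishes domain-level and class-level relatedness.

```python
import torch
import torch.nn.functional as F

def retrieve_related(task_embeddings, corpus_embeddings, k=100):
    """Retrieve the k unlabeled corpus instances most related to a task.

    task_embeddings: (T, D) embeddings of labeled task examples;
    corpus_embeddings: (N, D) embeddings of a large unlabeled corpus.
    Relatedness here is mean cosine similarity to the task set (illustrative only).
    """
    task = F.normalize(task_embeddings, dim=-1)
    corpus = F.normalize(corpus_embeddings, dim=-1)
    relatedness = (corpus @ task.t()).mean(dim=1)   # (N,) mean similarity to the task
    scores, idx = relatedness.topk(k)
    return idx, scores

# Usage sketch with random placeholder embeddings.
corpus = torch.randn(1000, 64)
task = torch.randn(16, 64)
idx, scores = retrieve_related(task, corpus, k=100)
# The retrieved instances would then join the labeled set for contrastive
# semi-supervised fine-tuning of the pre-trained language model.
```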
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.