Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective
- URL: http://arxiv.org/abs/2410.12816v2
- Date: Tue, 05 Nov 2024 02:27:30 GMT
- Title: Rethinking Misalignment in Vision-Language Model Adaptation from a Causal Perspective
- Authors: Yanan Zhang, Jiangmeng Li, Lixiang Liu, Wenwen Qiang
- Abstract summary: We propose Causality-Guided Semantic Decoupling and Classification to mitigate the interference of task-irrelevant knowledge. We employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics.
- Score: 13.56923651751788
- Abstract: Foundational Vision-Language models such as CLIP have exhibited impressive generalization in downstream tasks. However, CLIP suffers from a two-level misalignment issue, i.e., task misalignment and data misalignment, when adapting to specific tasks. Soft prompt tuning has mitigated the task misalignment, yet the data misalignment remains a challenge. To analyze the impacts of the data misalignment, we revisit the pre-training and adaptation processes of CLIP and develop a structural causal model. We discover that while we expect to capture task-relevant information for downstream tasks accurately, the task-irrelevant knowledge impacts the prediction results and hampers the modeling of the true relationships between the images and the predicted classes. As task-irrelevant knowledge is unobservable, we leverage the front-door adjustment and propose Causality-Guided Semantic Decoupling and Classification (CDC) to mitigate the interference of task-irrelevant knowledge. Specifically, we decouple semantics contained in the data of downstream tasks and perform classification based on each semantic. Furthermore, we employ the Dempster-Shafer evidence theory to evaluate the uncertainty of each prediction generated by diverse semantics. Experiments conducted in multiple different settings have consistently demonstrated the effectiveness of CDC.
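The fusion step described in the abstract can be pictured with Dempster's rule of combination, which the paper uses to weigh predictions produced by the decoupled semantics. The sketch below is a minimal, generic implementation of the rule itself, not the paper's CDC pipeline; the class names and mass values are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dict: frozenset of classes -> mass)
    via Dempster's rule, renormalizing away conflicting mass."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

# Two hypothetical 'semantic views' give evidence over classes {cat, dog};
# mass on the full set {cat, dog} expresses a view's residual uncertainty.
cat, dog = frozenset({"cat"}), frozenset({"dog"})
either = cat | dog
m1 = {cat: 0.6, either: 0.4}
m2 = {cat: 0.5, dog: 0.3, either: 0.2}
fused = dempster_combine(m1, m2)
```

Fusing the two views concentrates mass on `cat` while keeping a small amount of explicit uncertainty on the full set, which is the kind of per-prediction uncertainty signal the abstract describes.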
Related papers
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
arXiv Detail & Related papers (2024-05-23T08:43:09Z) - A Unified Causal View of Instruction Tuning [76.1000380429553]
We develop a meta Structural Causal Model (meta-SCM) to integrate different NLP tasks under a single causal structure of the data.
Key idea is to learn task-required causal factors and only use those to make predictions for a given task.
arXiv Detail & Related papers (2024-02-09T07:12:56Z) - Distributed Continual Learning with CoCoA in High-dimensional Linear Regression [0.0]
We consider estimation under scenarios where the signals of interest exhibit change of characteristics over time.
In particular, we consider the continual learning problem where different tasks, e.g., data with different distributions, arrive sequentially.
We consider the well-established distributed learning algorithm CoCoA, which distributes the model parameters and the corresponding features over the network.
arXiv Detail & Related papers (2023-12-04T10:35:46Z) - Enhancing Few-shot CLIP with Semantic-Aware Fine-Tuning [61.902254546858465]
Methods based on Contrastive Language-Image Pre-training have exhibited promising performance in few-shot adaptation tasks.
We propose fine-tuning the parameters of the attention pooling layer during the training process to encourage the model to focus on task-specific semantics.
arXiv Detail & Related papers (2023-11-08T05:18:57Z) - Mitigating Catastrophic Forgetting in Task-Incremental Continual Learning with Adaptive Classification Criterion [50.03041373044267]
We propose a Supervised Contrastive learning framework with adaptive classification criterion for Continual Learning.
Experiments show that CFL achieves state-of-the-art performance and a stronger ability to overcome catastrophic forgetting than the classification baselines.
arXiv Detail & Related papers (2023-05-20T19:22:40Z) - Masked Reconstruction Contrastive Learning with Information Bottleneck
Principle [9.136962881499734]
Contrastive learning (CL) has shown great power in self-supervised learning.
Current CL models are biased to learn only the ability to discriminate positive and negative pairs.
We propose the Masked Reconstruction Contrastive Learning (MRCL) model to improve CL models.
arXiv Detail & Related papers (2022-11-15T15:20:52Z) - Understanding and Mitigating Overfitting in Prompt Tuning for
Vision-Language Models [108.13378788663196]
We propose Subspace Prompt Tuning (SubPT) to project the gradients in back-propagation onto the low-rank subspace spanned by the early-stage gradient flow eigenvectors during the entire training process.
We equip CoOp with Novel Learner Feature (NFL) to enhance the generalization ability of the learned prompts onto novel categories beyond the training set.
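The SubPT idea of projecting back-propagated gradients onto a low-rank subspace can be sketched as multiplying the gradient by V Vᵀ, where the columns of V form an orthonormal basis of that subspace. A hypothetical NumPy illustration follows; the dimensions and random basis are stand-ins for the early-stage gradient-flow eigenvectors the paper derives.

```python
import numpy as np

def project_gradient(grad, basis):
    """Project a flattened gradient onto the subspace spanned by the
    orthonormal columns of `basis` (shape d x k): g -> V (V^T g)."""
    return basis @ (basis.T @ grad)

rng = np.random.default_rng(0)
d, k = 8, 2
# Stand-in orthonormal basis; SubPT derives it from early-stage gradients.
basis, _ = np.linalg.qr(rng.standard_normal((d, k)))
grad = rng.standard_normal(d)
grad_proj = project_gradient(grad, basis)
```

The residual `grad - grad_proj` is orthogonal to the subspace, so repeated projection leaves the gradient unchanged.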
arXiv Detail & Related papers (2022-11-04T02:06:22Z) - Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision [47.374412281270594]
General Continual Learning (GCL) aims at learning from non-independent and identically distributed (non-i.i.d.) stream data.
We reveal that the relation and feature deviations are crucial problems for catastrophic forgetting.
We propose a Complementary Calibration (CoCa) framework by mining the complementary model's outputs and features.
arXiv Detail & Related papers (2021-09-03T06:35:27Z) - Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
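One way to picture cross-task consistency is as a penalty on the disagreement between a direct prediction for one task and the prediction obtained by mapping another task's output into that task's space. The toy sketch below assumes a hypothetical linear cross-task mapping; it illustrates only the shape of such a loss, not the paper's actual computational method.

```python
import numpy as np

def consistency_loss(pred_task1, map_1to2, pred_task2):
    """Mean squared disagreement between the direct task-2 prediction and
    the task-1 prediction mapped into task-2's output space."""
    return float(np.mean((map_1to2(pred_task1) - pred_task2) ** 2))

rng = np.random.default_rng(1)
depth = rng.standard_normal((4, 4))   # toy task-1 output
map_1to2 = lambda d: 2.0 * d          # hypothetical learned cross-task mapping
normals = 2.0 * depth                 # task-2 target consistent with task 1
loss = consistency_loss(depth, map_1to2, normals)
```

When the two tasks' predictions agree under the mapping, the loss vanishes; disagreement increases it, driving the networks toward mutually consistent outputs.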
arXiv Detail & Related papers (2020-06-07T09:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.