Fine-Tuning Pre-trained Language Models for Robust Causal Representation Learning
- URL: http://arxiv.org/abs/2410.14375v1
- Date: Fri, 18 Oct 2024 11:06:23 GMT
- Title: Fine-Tuning Pre-trained Language Models for Robust Causal Representation Learning
- Authors: Jialin Yu, Yuxiang Zhou, Yulan He, Nevin L. Zhang, Ricardo Silva
- Abstract summary: Fine-tuning of pre-trained language models (PLMs) has been shown to be effective across various domains. We show that a robust representation can be derived through a so-called causal front-door adjustment, based on a decomposition assumption. Our work sheds light on the domain generalization problem by linking fine-tuning and causal mechanisms in representation learning.
- Score: 26.29386609645171
- Abstract: Fine-tuning of pre-trained language models (PLMs) has been shown to be effective across various domains. With domain-specific supervised data, the general-purpose representation derived from a PLM can be transformed into a domain-specific one. However, such methods often fail to generalize to out-of-domain (OOD) data because they rely on non-causal representations, often described as spurious features. Existing methods either apply causal adjustments under strong assumptions that rule out hidden common causes, or mitigate the effect of spurious features using multi-domain data. In this work, we investigate how fine-tuned pre-trained language models can aid generalization from a single domain under mild assumptions, targeting more general and practical real-world scenarios. We show that a robust representation can be derived through a so-called causal front-door adjustment, based on a decomposition assumption, using fine-tuned representations as a source of data augmentation. Comprehensive experiments in both synthetic and real-world settings demonstrate the superior generalizability of the proposed method compared to existing approaches. Our work thus sheds light on the domain generalization problem by linking fine-tuning and causal mechanisms in representation learning.
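For readers unfamiliar with the term, the front-door adjustment mentioned in the abstract is a standard identification result from causal inference (Pearl). Below is a minimal sketch in LaTeX, using generic treatment X, mediator Z, and outcome Y satisfying the front-door criterion; the paper's specific decomposition assumption and choice of mediator are not spelled out in this listing.

```latex
% Front-door adjustment (Pearl): P(y | do(x)) is identified when a
% mediator Z intercepts every directed path from X to Y and the
% relevant back-door paths are blocked.
% X, Z, Y are generic placeholders here, not the paper's notation.
P\big(y \mid \mathrm{do}(x)\big)
  = \sum_{z} P(z \mid x) \sum_{x'} P\big(y \mid x', z\big)\, P(x')
```

Per the abstract, the fine-tuned representation appears to play the role of such a mediating quantity, with fine-tuned representations serving as a source of data augmentation when estimating the adjustment; the exact estimator is given in the paper itself.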