On the Transferability of Visually Grounded PCFGs
- URL: http://arxiv.org/abs/2310.14107v1
- Date: Sat, 21 Oct 2023 20:19:51 GMT
- Title: On the Transferability of Visually Grounded PCFGs
- Authors: Yanpeng Zhao, Ivan Titov
- Abstract summary: We extend VC-PCFG (short for Visually-grounded Compound PCFG; Zhao and Titov, 2020) so that it can transfer across text domains.
We consider a zero-shot transfer learning setting where a model is trained on the source domain and is directly applied to target domains, without any further training.
- Our experimental results suggest that the benefits from using visual groundings transfer to text in a domain similar to the training domain but fail to transfer to remote domains.
- Score: 35.64371385720051
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There has been a significant surge of interest in visually grounded grammar
induction in recent times. While a variety of models have been developed for
the task and have demonstrated impressive performance, they have not been
evaluated on text domains that are different from the training domain, so it is
unclear if the improvements brought by visual groundings are transferable. Our
study aims to fill this gap and assess the degree of transferability. We start
by extending VC-PCFG (short for Visually-grounded Compound
PCFG~\citep{zhao-titov-2020-visually}) in such a way that it can transfer
across text domains. We consider a zero-shot transfer learning setting where a
model is trained on the source domain and is directly applied to target
domains, without any further training. Our experimental results suggest that
the benefits from using visual groundings transfer to text in a domain similar
to the training domain but fail to transfer to remote domains. Further, we
conduct data and result analysis; we find that the lexicon overlap between the
source domain and the target domain is the most important factor in the
transferability of VC-PCFG.
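Since the abstract identifies lexicon overlap between source and target domains as the key transferability factor, a minimal sketch of how such an overlap score might be computed is given below. The exact metric used in the paper is not specified here; this illustrates one common choice (the fraction of target-domain word types also present in the source-domain vocabulary), with made-up toy sentences.

```python
def lexicon_overlap(source_sents, target_sents):
    """Fraction of target-domain word types that also occur in the source domain.

    A hypothetical overlap metric for illustration; the paper's exact
    formulation may differ (e.g., token-weighted or lemma-based overlap).
    """
    src_vocab = {w.lower() for sent in source_sents for w in sent.split()}
    tgt_vocab = {w.lower() for sent in target_sents for w in sent.split()}
    if not tgt_vocab:
        return 0.0
    return len(src_vocab & tgt_vocab) / len(tgt_vocab)

# Toy example: a caption-like source domain vs. a mixed target domain.
source = ["a man rides a bike", "a dog runs on grass"]
target = ["a man walks a dog", "stocks fell sharply today"]
print(lexicon_overlap(source, target))  # 3 of 8 target types are shared
```

Under this metric, a target domain close to the training domain (e.g., other image captions) would score near 1.0, while a remote domain (e.g., financial news) would score much lower, matching the paper's qualitative finding.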
Related papers
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
It is vital to learn effective policies that can be transferred to different domains with dynamics discrepancies in reinforcement learning (RL).
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test-time using a few unlabeled data to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z)
- Phrase Grounding-based Style Transfer for Single-Domain Generalized Object Detection [109.58348694132091]
Single-domain generalized object detection aims to enhance a model's generalizability to multiple unseen target domains.
This is a practical yet challenging task as it requires the model to address domain shift without incorporating target domain data into training.
We propose a novel phrase grounding-based style transfer approach for the task.
arXiv Detail & Related papers (2024-02-02T10:48:43Z)
- Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods.
arXiv Detail & Related papers (2022-04-01T16:56:26Z)
- A Broad Study of Pre-training for Domain Generalization and Adaptation [69.38359595534807]
We provide a broad study and in-depth analysis of pre-training for domain adaptation and generalization.
We observe that simply using a state-of-the-art backbone outperforms existing state-of-the-art domain adaptation baselines.
arXiv Detail & Related papers (2022-03-22T15:38:36Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- Physically-Constrained Transfer Learning through Shared Abundance Space for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z)
- Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers [138.68213707587822]
We propose a simple, practical, and intuitive approach for domain adaptation in reinforcement learning.
We show that we can achieve this goal by compensating for the difference in dynamics by modifying the reward function.
Our approach is applicable to domains with continuous states and actions and does not require learning an explicit model of the dynamics.
arXiv Detail & Related papers (2020-06-24T17:47:37Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.