$n$-Reference Transfer Learning for Saliency Prediction
- URL: http://arxiv.org/abs/2007.05104v1
- Date: Thu, 9 Jul 2020 23:20:44 GMT
- Title: $n$-Reference Transfer Learning for Saliency Prediction
- Authors: Yan Luo, Yongkang Wong, Mohan S. Kankanhalli, and Qi Zhao
- Abstract summary: We propose a few-shot transfer learning paradigm for saliency prediction.
The proposed framework is gradient-based and model-agnostic.
The results show that the proposed framework achieves a significant performance improvement.
- Score: 73.17061116358036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benefiting from deep learning research and large-scale datasets, saliency
prediction has achieved significant success in the past decade. However, it
remains challenging to predict saliency maps for images in new domains
that lack sufficient data for data-hungry models. To solve this problem, we
propose a few-shot transfer learning paradigm for saliency prediction, which
enables efficient transfer of knowledge learned from the existing large-scale
saliency datasets to a target domain with limited labeled examples.
Specifically, very few target domain examples are used as the reference to
train a model with a source domain dataset such that the training process can
converge to a local minimum in favor of the target domain. Then, the learned
model is further fine-tuned with the reference. The proposed framework is
gradient-based and model-agnostic. We conduct comprehensive experiments and
ablation study on various source domain and target domain pairs. The results
show that the proposed framework achieves a significant performance
improvement. The code is publicly available at
\url{https://github.com/luoyan407/n-reference}.
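To make the two-stage procedure concrete, below is a minimal sketch of reference-guided training followed by fine-tuning. It assumes the n target-domain references simply contribute an auxiliary loss term during source-domain training; the paper's actual gradient-based n-reference mechanism may differ, so consult the authors' code at https://github.com/luoyan407/n-reference for the real implementation. All function names, the toy model, and the loss weighting here are illustrative assumptions, not the authors' API.

```python
# Sketch only: a generic two-stage, reference-guided training loop.
# Assumption (not from the paper): the n references add an auxiliary loss
# during stage 1; stage 2 fine-tunes on the references alone.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def reference_guided_training(model, source_loader, reference_batch,
                              ref_weight=0.1, epochs=1, lr=1e-3):
    """Stage 1: train on the source domain while a few target-domain
    references steer optimization toward a target-friendly minimum."""
    criterion = nn.BCEWithLogitsLoss()  # dense saliency maps as soft labels
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    ref_x, ref_y = reference_batch
    for _ in range(epochs):
        for src_x, src_y in source_loader:
            optimizer.zero_grad()
            loss = criterion(model(src_x), src_y)                          # source loss
            loss = loss + ref_weight * criterion(model(ref_x), ref_y)      # reference term
            loss.backward()
            optimizer.step()
    return model

def finetune_on_reference(model, reference_batch, steps=20, lr=1e-4):
    """Stage 2: fine-tune the learned model on the same n references."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    ref_x, ref_y = reference_batch
    for _ in range(steps):
        optimizer.zero_grad()
        criterion(model(ref_x), ref_y).backward()
        optimizer.step()
    return model

if __name__ == "__main__":
    # Toy stand-ins: a tiny saliency head and random "images"/"maps".
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(8, 1, 1))
    source = TensorDataset(torch.randn(32, 3, 64, 64), torch.rand(32, 1, 64, 64))
    source_loader = DataLoader(source, batch_size=8)
    reference = (torch.randn(5, 3, 64, 64), torch.rand(5, 1, 64, 64))  # n = 5 references
    model = reference_guided_training(model, source_loader, reference)
    model = finetune_on_reference(model, reference)
```

Because the framework is model-agnostic, any saliency predictor could stand in for the toy network above; only the training loop changes.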
Related papers
- SiamSeg: Self-Training with Contrastive Learning for Unsupervised Domain Adaptation Semantic Segmentation in Remote Sensing [14.007392647145448]
Unsupervised domain adaptation (UDA) enables models to learn from unlabeled target-domain data while training on labeled source-domain data.
We propose integrating contrastive learning into UDA, enhancing the model's capacity to capture semantic information.
Our SiamSeg method outperforms existing approaches, achieving state-of-the-art results.
arXiv Detail & Related papers (2024-10-17T11:59:39Z) - Forget Less, Count Better: A Domain-Incremental Self-Distillation Learning Benchmark for Lifelong Crowd Counting [51.44987756859706]
Off-the-shelf methods have drawbacks when handling multiple domains.
Lifelong Crowd Counting aims to alleviate catastrophic forgetting and improve generalization ability.
arXiv Detail & Related papers (2022-05-06T15:37:56Z) - Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z) - Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There is always a severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z) - Few-shot Image Generation with Elastic Weight Consolidation [53.556446614013105]
Few-shot image generation seeks to generate more data of a given domain with only a few available training examples.
We adapt a pretrained model, without introducing any additional parameters, to the few examples of the target domain.
We demonstrate the effectiveness of our algorithm by generating high-quality results of different target domains.
arXiv Detail & Related papers (2020-12-04T18:57:13Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Learning to Cluster under Domain Shift [20.00056591000625]
In this work we address the problem of transferring knowledge from a source to a target domain when both source and target data have no annotations.
Inspired by recent works on deep clustering, our approach leverages information from data gathered from multiple source domains.
We show that our method is able to automatically discover relevant semantic information even in the presence of few target samples.
arXiv Detail & Related papers (2020-08-11T12:03:01Z) - Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where much less annotated data is available in the target domain than in the source domain.
Our semantic parser benefits from a two-stage coarse-to-fine framework and can thus provide distinct and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.