Transferability Estimation for Semantic Segmentation Task
- URL: http://arxiv.org/abs/2109.15242v3
- Date: Thu, 29 Feb 2024 06:24:13 GMT
- Title: Transferability Estimation for Semantic Segmentation Task
- Authors: Yang Tan, Yang Li, Shao-Lun Huang
- Abstract summary: We extend the recent transferability metric OTCE score to the semantic segmentation task.
The challenge in applying the OTCE score is the high-dimensional segmentation output, which makes it difficult to find the optimal coupling between so many pixels at an acceptable cost.
Experimental evaluation on Cityscapes, BDD100K and GTA5 datasets demonstrates that the OTCE score highly correlates with the transfer performance.
- Score: 20.07223947190349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transferability estimation is a fundamental problem in transfer learning to
predict how good the performance is when transferring a source model (or source
task) to a target task. With the guidance of transferability score, we can
efficiently select the highly transferable source models without performing the
real transfer in practice. Recent analytical transferability metrics are mainly
designed for the image classification problem, and currently there is no specific
investigation into transferability estimation for the semantic segmentation task,
which is an essential problem in autonomous driving, medical image analysis,
etc. Consequently, we further extend the recent analytical transferability
metric OTCE (Optimal Transport based Conditional Entropy) score to the semantic
segmentation task. The challenge in applying the OTCE score is the
high-dimensional segmentation output, which makes it difficult to find the
optimal coupling between so many pixels at an acceptable computational cost.
Thus we
propose to randomly sample N pixels for computing OTCE score and take the
expectation over K repetitions as the final transferability score. Experimental
evaluation on Cityscapes, BDD100K and GTA5 datasets demonstrates that the OTCE
score highly correlates with the transfer performance.
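The sampling scheme described in the abstract can be sketched as follows. Here `compute_otce` is a hypothetical placeholder for the underlying OTCE computation, and the default `n_pixels`/`k_repeats` values are illustrative, not the paper's settings:

```python
import numpy as np

def segmentation_otce(source_feats, source_labels, target_feats, target_labels,
                      compute_otce, n_pixels=1000, k_repeats=10, seed=0):
    """Estimate OTCE for segmentation by averaging over K random pixel subsets.

    source_feats/target_feats: (num_pixels, dim) per-pixel features;
    source_labels/target_labels: (num_pixels,) per-pixel class labels;
    compute_otce: callable scoring a sampled (source, target) pixel set.
    """
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(k_repeats):
        # Randomly sample N pixels from each side to keep the OT problem tractable.
        src_idx = rng.choice(len(source_feats), size=n_pixels, replace=False)
        tgt_idx = rng.choice(len(target_feats), size=n_pixels, replace=False)
        scores.append(compute_otce(source_feats[src_idx], source_labels[src_idx],
                                   target_feats[tgt_idx], target_labels[tgt_idx]))
    # The expectation over K repetitions is the final transferability score.
    return float(np.mean(scores))
```

This keeps each optimal-transport problem at N x N size rather than coupling every pixel in every image, at the cost of running the base metric K times.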
Related papers
- Towards Estimating Transferability using Hard Subsets [25.86053764521497]
We propose HASTE, a new strategy to estimate the transferability of a source model to a particular target task using only a harder subset of target data.
We show that HASTE can be used with any existing transferability metric to improve their reliability.
Our experimental results across multiple source model architectures, target datasets, and transfer learning tasks show that HASTE modified metrics are consistently better or on par with the state of the art transferability metrics.
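The plug-in nature of HASTE can be sketched as follows, with hardness taken here as low source-model confidence on the true class (an assumed criterion for illustration; the paper defines its own hardness measures):

```python
import numpy as np

def haste_score(probs, labels, metric, frac_hard=0.5):
    """Apply any transferability metric to only the hardest fraction of target data.

    probs: (n, num_classes) source-model softmax outputs on target samples;
    labels: (n,) target labels;
    metric: any callable(probs_subset, labels_subset) -> score.
    """
    # Hardness proxy (assumed): low confidence assigned to the true class.
    confidence = probs[np.arange(len(labels)), labels]
    k = max(1, int(frac_hard * len(labels)))
    hard_idx = np.argsort(confidence)[:k]  # least confident = hardest
    return metric(probs[hard_idx], labels[hard_idx])
```

Because the metric is passed in as a callable, any existing score (OTCE, TransRate, etc.) can be evaluated on the hard subset without modification.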
arXiv Detail & Related papers (2023-01-17T14:50:18Z) - Transferability Estimation Based On Principal Gradient Expectation [68.97403769157117]
A good cross-task transferability measure should be compatible with the actual transfer results while remaining self-consistent.
Existing transferability metrics are estimated on a particular model by comparing the source and target tasks.
We propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks.
arXiv Detail & Related papers (2022-11-29T15:33:02Z) - Can You Label Less by Using Out-of-Domain Data? Active & Transfer
Learning with Few-shot Instructions [58.69255121795761]
We propose a novel Active Transfer Few-shot Instructions (ATF) approach which requires no fine-tuning.
ATF leverages the internal linguistic knowledge of pre-trained language models (PLMs) to facilitate the transfer of information.
We show that annotation of just a few target-domain samples via active learning can be beneficial for transfer, but the impact diminishes with more annotation effort.
arXiv Detail & Related papers (2022-11-21T19:03:31Z) - Transferability-Guided Cross-Domain Cross-Task Transfer Learning [21.812715282796255]
We propose two novel transferability metrics F-OTCE and JC-OTCE.
F-OTCE estimates transferability by first solving an Optimal Transport problem between source and target distributions.
JC-OTCE improves the transferability of F-OTCE by including label distances in the OT problem.
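A minimal numpy sketch of the F-OTCE pipeline described above might look like this; the entropic (Sinkhorn) solver, the `reg` value, and the squared-Euclidean cost are illustrative assumptions rather than the paper's exact choices:

```python
import numpy as np

def sinkhorn_coupling(xs, xt, reg=0.1, n_iter=200):
    """Entropic-OT coupling between two empirical distributions (uniform weights)."""
    cost = ((xs[:, None, :] - xt[None, :, :]) ** 2).sum(-1)
    cost = cost / max(cost.max(), 1e-12)  # normalize for numerical stability
    K = np.exp(-cost / reg)
    a = np.full(len(xs), 1.0 / len(xs))
    b = np.full(len(xt), 1.0 / len(xt))
    v = np.full(len(xt), 1.0 / len(xt))
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def f_otce(xs, ys, xt, yt, reg=0.1):
    """Negative conditional entropy of target labels given source labels,
    under the OT coupling (a sketch of the F-OTCE idea; higher is better)."""
    pi = sinkhorn_coupling(xs, xt, reg)
    classes_s, classes_t = np.unique(ys), np.unique(yt)
    # Joint label distribution induced by the pixel/sample coupling.
    p_joint = np.zeros((len(classes_s), len(classes_t)))
    for i, cs in enumerate(classes_s):
        for j, ct in enumerate(classes_t):
            p_joint[i, j] = pi[np.ix_(ys == cs, yt == ct)].sum()
    p_src = p_joint.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p_joint > 0, p_joint * np.log(p_joint / p_src), 0.0)
    return terms.sum()
```

The coupling matrix plays the role of a soft correspondence between source and target samples, over which the label-conditional entropy is evaluated.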
arXiv Detail & Related papers (2022-07-12T13:06:16Z) - Transferability Estimation using Bhattacharyya Class Separability [37.52588126267552]
Transfer learning is a popular method for leveraging pre-trained models in computer vision.
It is difficult to quantify which pre-trained source models are suitable for a specific target task.
We propose a novel method for quantifying transferability between a source model and a target dataset.
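One way to read "Bhattacharyya class separability" is to fit a Gaussian per target class in the source model's feature space and penalize pairwise class overlap. The diagonal-covariance Gaussians and the negated-sum aggregation below are simplifying assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def bhattacharyya_coeff(mu1, var1, mu2, var2):
    """Bhattacharyya coefficient between two diagonal-covariance Gaussians."""
    var = 0.5 * (var1 + var2)
    dist = 0.125 * np.sum((mu1 - mu2) ** 2 / var)
    dist += 0.5 * np.sum(np.log(var)) - 0.25 * (np.sum(np.log(var1)) + np.sum(np.log(var2)))
    return np.exp(-dist)  # 1 = identical distributions, 0 = fully separated

def class_separability_score(feats, labels, eps=1e-6):
    """Negated sum of pairwise class overlaps: higher means more separable
    target classes in the source feature space."""
    classes = np.unique(labels)
    stats = [(feats[labels == c].mean(0), feats[labels == c].var(0) + eps)
             for c in classes]
    overlap = sum(bhattacharyya_coeff(*stats[i], *stats[j])
                  for i in range(len(classes))
                  for j in range(i + 1, len(classes)))
    return -overlap
```

Intuitively, a source model whose features already separate the target classes should transfer well, so lower pairwise overlap yields a higher score.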
arXiv Detail & Related papers (2021-11-24T20:22:28Z) - Practical Transferability Estimation for Image Classification Tasks [20.07223947190349]
A major challenge is how to make transferability estimation robust under cross-domain cross-task settings.
The recently proposed OTCE score solves this problem by considering both domain and task differences.
We propose a practical transferability metric called JC-NCE score that dramatically improves the robustness of the task difference estimation.
arXiv Detail & Related papers (2021-06-19T11:59:11Z) - Frustratingly Easy Transferability Estimation [64.42879325144439]
We propose a simple, efficient, and effective transferability measure named TransRate.
TransRate measures the transferability as the mutual information between the features of target examples extracted by a pre-trained model and labels of them.
Despite its extraordinary simplicity in 10 lines of code, TransRate performs remarkably well in extensive evaluations on 22 pre-trained models and 16 downstream tasks.
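As a rough illustration of the idea, the feature-label mutual information can be approximated with a coding-rate estimate, R(Z) minus the label-conditioned rate; the `eps` distortion parameter and diagonal details below are assumptions, not the paper's exact formulation:

```python
import numpy as np

def coding_rate(z, eps=1e-4):
    """Rate-distortion style estimate: 1/2 logdet(I + d/(n*eps) * Z^T Z)."""
    n, d = z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps)) * (z.T @ z))[1]

def transrate(z, y, eps=1e-4):
    """Sketch of a TransRate-style score: R(Z) - sum_c p_c * R(Z_c)."""
    z = z - z.mean(axis=0)  # center features before estimating rates
    total = coding_rate(z, eps)
    cond = 0.0
    for c in np.unique(y):
        zc = z[y == c]
        cond += (len(zc) / len(z)) * coding_rate(zc, eps)
    return total - cond  # higher suggests features carry more label information
```

The appeal is that only a single forward pass over the target data is needed; no optimal transport or model re-training is involved.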
arXiv Detail & Related papers (2021-06-17T10:27:52Z) - OTCE: A Transferability Metric for Cross-Domain Cross-Task
Representations [6.730043708859326]
We propose a transferability metric called Optimal Transport based Conditional Entropy (OTCE)
OTCE characterizes transferability as a combination of domain difference and task difference, and explicitly evaluates them from data in a unified framework.
Experiments on the largest cross-domain dataset DomainNet and Office31 demonstrate that OTCE shows an average of 21% gain in the correlation with the ground truth transfer accuracy.
arXiv Detail & Related papers (2021-03-25T13:51:33Z) - Learning Invariant Representations across Domains and Tasks [81.30046935430791]
We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases the recall and F1 score by 5.0% and 7.8% compared to recent strong baselines.
arXiv Detail & Related papers (2021-03-03T11:18:43Z) - Towards Accurate Knowledge Transfer via Target-awareness Representation
Disentanglement [56.40587594647692]
We propose a novel transfer learning algorithm, introducing the idea of Target-awareness REpresentation Disentanglement (TRED)
TRED disentangles the knowledge relevant to the target task from the original source model and uses it as a regularizer when fine-tuning the target model.
Experiments on various real-world datasets show that our method stably improves standard fine-tuning by more than 2% on average.
arXiv Detail & Related papers (2020-10-16T17:45:08Z) - Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.