Cross-Domain Transfer Learning with CoRTe: Consistent and Reliable
Transfer from Black-Box to Lightweight Segmentation Model
- URL: http://arxiv.org/abs/2402.13122v1
- Date: Tue, 20 Feb 2024 16:35:14 GMT
- Title: Cross-Domain Transfer Learning with CoRTe: Consistent and Reliable
Transfer from Black-Box to Lightweight Segmentation Model
- Authors: Claudia Cuttano, Antonio Tavera, Fabio Cermelli, Giuseppe Averta,
Barbara Caputo
- Abstract summary: CoRTe is a pseudo-labelling function that extracts reliable knowledge from a black-box source model.
We benchmark CoRTe on two synthetic-to-real settings, demonstrating remarkable results when using black-box models to transfer knowledge to lightweight models for a target data distribution.
- Score: 25.3403116022412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many practical applications require training of semantic segmentation models
on unlabelled datasets and their execution on low-resource hardware.
Distillation from a trained source model may represent a solution for the first
but does not account for the different distribution of the training data.
Unsupervised domain adaptation (UDA) techniques claim to solve the domain
shift, but in most cases assume the availability of the source data or an
accessible white-box source model, which in practical applications are often
unavailable for commercial and/or safety reasons. In this paper, we investigate
a more challenging setting in which a lightweight model has to be trained on a
target unlabelled dataset for semantic segmentation, under the assumption that
we have access only to black-box source model predictions. Our method, named
CoRTe, consists of (i) a pseudo-labelling function that extracts reliable
knowledge from the black-box source model using its relative confidence, (ii) a
pseudo label refinement method to retain and enhance the novel information
learned by the student model on the target data, and (iii) a consistent
training of the model using the extracted pseudo labels. We benchmark CoRTe on
two synthetic-to-real settings, demonstrating remarkable results when using
black-box models to transfer knowledge to lightweight models for a target data distribution.
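For intuition, component (i) can be sketched as keeping only the pixels where the black-box model's top-1 probability clearly dominates its top-2 probability. The rule and threshold below are illustrative assumptions, not the authors' exact formulation.

```python
import torch

IGNORE_INDEX = 255  # pixels with this label are skipped by the loss

def relative_confidence_pseudo_labels(probs: torch.Tensor,
                                      ratio_threshold: float = 2.0) -> torch.Tensor:
    """Turn black-box softmax outputs into filtered per-pixel pseudo labels.

    probs: (B, C, H, W) class probabilities returned by the source model.
    A pixel is kept only if its top-1 probability exceeds its top-2
    probability by `ratio_threshold`; otherwise it is marked IGNORE_INDEX.
    """
    top2 = probs.topk(k=2, dim=1).values              # (B, 2, H, W)
    relative_conf = top2[:, 0] / top2[:, 1].clamp_min(1e-8)
    labels = probs.argmax(dim=1)                      # (B, H, W)
    labels[relative_conf < ratio_threshold] = IGNORE_INDEX
    return labels

# The lightweight student is then trained with a cross-entropy that
# ignores the filtered pixels, e.g.
# F.cross_entropy(student_logits, labels, ignore_index=IGNORE_INDEX)
```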
Related papers
- Incremental Pseudo-Labeling for Black-Box Unsupervised Domain Adaptation [14.596659424489223]
We propose a novel approach that incrementally selects high-confidence pseudo-labels to improve the generalization ability of the target model.
Experimental results demonstrate that the proposed method achieves state-of-the-art black-box unsupervised domain adaptation performance on three benchmark datasets.
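The incremental selection can be illustrated by growing the fraction of retained high-confidence pseudo-labels each round; the linear schedule below is an assumption for illustration, not necessarily the paper's criterion.

```python
import torch

def select_confident_subset(logits: torch.Tensor, round_idx: int,
                            base_frac: float = 0.2, step: float = 0.1):
    """Keep the most confident target samples, with the kept fraction
    growing each adaptation round so harder samples enter training later."""
    frac = min(1.0, base_frac + step * round_idx)
    confidence, pseudo_labels = logits.softmax(dim=1).max(dim=1)
    k = max(1, int(frac * logits.shape[0]))
    kept = confidence.topk(k).indices
    return kept, pseudo_labels[kept]

# Round 0 keeps 20% of 100 samples; round 3 keeps 50%.
logits = torch.randn(100, 10)  # black-box outputs on target data
idx, labels = select_confident_subset(logits, round_idx=3)
```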
arXiv Detail & Related papers (2024-05-26T05:41:42Z)
- Building a Winning Team: Selecting Source Model Ensembles using a Submodular Transferability Estimation Approach [20.86345962679122]
Estimating the transferability of publicly available pretrained models to a target task has assumed an important place for transfer learning tasks.
We propose a novel Optimal tranSport-based suBmOdular tRaNsferability metric (OSBORN) to estimate the transferability of an ensemble of models to a downstream task.
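OSBORN's metric itself is built on optimal transport; the sketch below only shows the greedy ensemble construction that a submodular objective justifies, with a hypothetical `toy_score` standing in for the real transferability-plus-diversity measure.

```python
from typing import Callable, List, Sequence

def greedy_submodular_selection(candidates: Sequence[str],
                                score: Callable[[List[str]], float],
                                budget: int) -> List[str]:
    """Greedily add the model with the largest marginal gain; for
    submodular scores this enjoys the classic (1 - 1/e) guarantee."""
    selected: List[str] = []
    pool = list(candidates)
    for _ in range(min(budget, len(pool))):
        best = max(pool, key=lambda m: score(selected + [m]) - score(selected))
        selected.append(best)
        pool.remove(best)
    return selected

# Hypothetical per-model gains with a diminishing-returns penalty.
gains = {"resnet50": 3.0, "vit_b": 2.5, "convnext": 2.0}
toy_score = lambda ens: sum(gains[m] for m in ens) - 0.5 * len(ens) ** 2
print(greedy_submodular_selection(list(gains), toy_score, budget=2))
```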
arXiv Detail & Related papers (2023-09-05T17:57:31Z)
- Curriculum Guided Domain Adaptation in the Dark [0.0]
Domain adaptation in the dark aims to adapt a black-box source trained model to an unlabeled target domain without access to source data or source model parameters.
We present Curriculum Adaptation for Black-Box (CABB) which provides a curriculum guided adaptation approach to gradually train the target model.
Our method utilizes co-training of a dual-branch network to suppress error accumulation resulting from confirmation bias.
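A minimal sketch of dual-branch co-training: each branch is supervised by the other branch's confident pseudo-labels, so neither can keep reinforcing its own mistakes. The linear branches and the fixed confidence threshold are placeholders, not CABB's actual design.

```python
import torch
import torch.nn.functional as F

def cotraining_step(branch_a, branch_b, x, opt_a, opt_b, conf_thresh=0.9):
    """One co-training update on an unlabeled target batch."""
    with torch.no_grad():
        conf_a, pseudo_a = branch_a(x).softmax(dim=1).max(dim=1)
        conf_b, pseudo_b = branch_b(x).softmax(dim=1).max(dim=1)

    # Cross supervision: A learns from B's confident labels and vice versa.
    for model, opt, pseudo, conf in ((branch_a, opt_a, pseudo_b, conf_b),
                                     (branch_b, opt_b, pseudo_a, conf_a)):
        mask = conf > conf_thresh
        if mask.any():
            loss = F.cross_entropy(model(x)[mask], pseudo[mask])
            opt.zero_grad()
            loss.backward()
            opt.step()

# Two small stand-in branches on 64 unlabeled samples.
a, b = torch.nn.Linear(32, 10), torch.nn.Linear(32, 10)
cotraining_step(a, b, torch.randn(64, 32),
                torch.optim.SGD(a.parameters(), lr=0.01),
                torch.optim.SGD(b.parameters(), lr=0.01))
```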
arXiv Detail & Related papers (2023-08-02T05:47:56Z)
- DREAM: Domain-free Reverse Engineering Attributes of Black-box Model [51.37041886352823]
We propose a new problem of Domain-agnostic Reverse Engineering the Attributes of a black-box target model.
We learn a domain-agnostic model to infer the attributes of a target black-box model with unknown training data.
arXiv Detail & Related papers (2023-07-20T16:25:58Z)
- Black-box Source-free Domain Adaptation via Two-stage Knowledge Distillation [8.224874938178633]
Source-free domain adaptation aims to adapt deep neural networks using only pre-trained source models and target data.
However, accessing the source model still raises the concern of leaking the source data, which could reveal a patient's private information.
We study the challenging but practical problem: black-box source-free domain adaptation where only the outputs of the source model and target data are available.
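Since only the source model's output probabilities are observable in this setting, a distillation stage typically reduces to a KL loss between those outputs and the student's prediction. A minimal sketch of such a stage-1 loss (the second, fine-tuning stage is omitted):

```python
import torch
import torch.nn.functional as F

def blackbox_distillation_loss(student_logits: torch.Tensor,
                               teacher_probs: torch.Tensor) -> torch.Tensor:
    """KL divergence between black-box teacher probabilities and the
    student's prediction; only teacher *outputs* are needed, never weights."""
    log_p_student = F.log_softmax(student_logits, dim=1)
    return F.kl_div(log_p_student, teacher_probs, reduction="batchmean")

# Dummy data: 8 samples, 19 classes (e.g. a Cityscapes-style label set).
student_logits = torch.randn(8, 19, requires_grad=True)
teacher_probs = torch.softmax(torch.randn(8, 19), dim=1)  # API responses
blackbox_distillation_loss(student_logits, teacher_probs).backward()
```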
arXiv Detail & Related papers (2023-05-13T10:00:24Z)
- TRAK: Attributing Model Behavior at Scale [79.56020040993947]
We present TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models.
arXiv Detail & Related papers (2023-03-24T17:56:22Z)
- RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation adapts the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation, applying both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z)
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation from the perspective of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
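The neighborhood-consistency regularizer can be sketched as pulling each target sample's predicted distribution towards the average prediction of its nearest neighbors in feature space; the cosine k-NN construction and KL form below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency_loss(features: torch.Tensor,
                                  probs: torch.Tensor, k: int = 4) -> torch.Tensor:
    """Encourage each sample's prediction to agree with the mean
    prediction of its k nearest neighbors in (cosine) feature space."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()
    sim.fill_diagonal_(float("-inf"))           # exclude self-matches
    nn_idx = sim.topk(k, dim=1).indices         # (N, k)
    neighbor_probs = probs[nn_idx].mean(dim=1)  # (N, C)
    return F.kl_div(probs.clamp_min(1e-8).log(),
                    neighbor_probs.detach(), reduction="batchmean")

# 32 target samples, 64-dim features, 10 classes.
feats = torch.randn(32, 64)
probs = torch.softmax(torch.randn(32, 10, requires_grad=True), dim=1)
neighborhood_consistency_loss(feats, probs).backward()
```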
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when adapting the model.
This work tackles a practical setting where only a trained source model is available and studies how to effectively utilize such a model without source data to solve UDA problems.
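SHOT's widely cited recipe freezes the source classifier (the "hypothesis") and adapts only the feature extractor, typically with an information-maximization objective; a minimal sketch under those assumptions:

```python
import torch

def information_maximization_loss(logits: torch.Tensor) -> torch.Tensor:
    """Minimize per-sample entropy (confident predictions) while
    maximizing the entropy of the mean prediction (diverse class usage)."""
    probs = logits.softmax(dim=1)
    ent_sample = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    mean_probs = probs.mean(dim=0)
    ent_marginal = -(mean_probs * mean_probs.clamp_min(1e-8).log()).sum()
    return ent_sample - ent_marginal

# Freeze the source classifier; only the feature extractor adapts.
feature_extractor = torch.nn.Linear(128, 64)
classifier = torch.nn.Linear(64, 10)
for p in classifier.parameters():
    p.requires_grad_(False)

x = torch.randn(16, 128)  # unlabeled target batch
information_maximization_loss(classifier(feature_extractor(x))).backward()
```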
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.