Data-Free Knowledge Transfer: A Survey
- URL: http://arxiv.org/abs/2112.15278v1
- Date: Fri, 31 Dec 2021 03:39:42 GMT
- Title: Data-Free Knowledge Transfer: A Survey
- Authors: Yuang Liu, Wei Zhang, Jun Wang, Jianyong Wang
- Abstract summary: Knowledge distillation (KD) and domain adaptation (DA) have been proposed and have become research highlights.
Both aim to transfer useful information from a well-trained model built on the original training data.
Recently, the data-free knowledge transfer paradigm has attracted considerable attention.
- Score: 13.335198869928167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade, many deep learning models have been well
trained and have achieved great success in various fields of machine
intelligence, especially
computer vision and natural language processing. To better leverage the
potential of these well-trained models in intra-domain or cross-domain transfer
learning situations, knowledge distillation (KD) and domain adaptation (DA)
have been proposed and have become research highlights. Both aim to transfer useful
information from a well-trained model using the original training data. However,
the original data are often unavailable due to privacy, copyright,
or confidentiality. Recently, the data-free knowledge transfer paradigm has
attracted considerable attention, as it distills valuable knowledge from
well-trained models without requiring access to the training data. In
particular, it mainly consists of data-free knowledge distillation (DFKD)
and source data-free domain adaptation (SFDA). On the one hand, DFKD aims to
transfer the intra-domain knowledge of original data from a cumbersome teacher
network to a compact student network for model compression and efficient
inference. On the other hand, the goal of SFDA is to reuse the cross-domain
knowledge stored in a well-trained source model and adapt it to a target
domain. In this paper, we provide a comprehensive survey on data-free knowledge
transfer from the perspectives of knowledge distillation and unsupervised
domain adaptation, to help readers better understand the current research
status and ideas. Applications and challenges of the two areas are briefly
reviewed in turn. Furthermore, we offer some insights into directions for
future research.
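
To make the DFKD setting above concrete, the following is a minimal, hypothetical PyTorch sketch of generator-based data-free distillation: a generator synthesizes pseudo-inputs from noise, and the student is trained to match the teacher's predictions on them. The architectures, losses, and hyperparameters are illustrative assumptions, not any particular paper's method.

```python
# Minimal sketch of generator-based data-free knowledge distillation (DFKD).
# All architectures and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)

def kd_loss(s_logits, t_logits):
    # KL divergence between student and teacher output distributions.
    return F.kl_div(F.log_softmax(s_logits, dim=1),
                    F.softmax(t_logits, dim=1), reduction="batchmean")

for step in range(200):
    z = torch.randn(64, 16)                       # noise -> pseudo-inputs
    # 1) Generator seeks samples on which student and teacher disagree.
    x = generator(z)
    with torch.no_grad():
        t_logits = teacher(x)
    opt_g.zero_grad()
    (-kd_loss(student(x), t_logits)).backward()   # maximize disagreement
    opt_g.step()
    # 2) Student mimics the teacher on freshly generated samples.
    x = generator(z).detach()
    with torch.no_grad():
        t_logits = teacher(x)
    opt_s.zero_grad()
    kd_loss(student(x), t_logits).backward()      # minimize disagreement
    opt_s.step()
```

Adversarial DFKD variants alternate the two updates roughly as above; non-adversarial variants instead synthesize data by matching priors such as the teacher's batch-normalization statistics.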
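An equally minimal sketch of the SFDA side: the source data are unavailable, so the pretrained source model is adapted on unlabeled target data using only self-supervised signals. The frozen classifier ("hypothesis") and the information-maximization loss are common ingredients in this literature, but the specifics here are assumptions for illustration.

```python
# Minimal sketch of source-free domain adaptation (SFDA): adapt a pretrained
# source model to an unlabeled target domain without touching source data.
# The frozen classifier and the loss terms are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())  # pretrained on source
classifier = nn.Linear(64, 10)                                   # pretrained source "hypothesis"
for p in classifier.parameters():
    p.requires_grad_(False)                                      # keep the hypothesis fixed
opt = torch.optim.Adam(feature_extractor.parameters(), lr=1e-4)

target_batches = [torch.randn(64, 32) for _ in range(100)]       # stand-in for unlabeled target data

for x in target_batches:
    probs = F.softmax(classifier(feature_extractor(x)), dim=1)
    # Entropy minimization: make each individual prediction confident.
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Diversity: keep the batch-average prediction spread over classes,
    # discouraging collapse onto a single class.
    mean_p = probs.mean(dim=0)
    div = (mean_p * mean_p.clamp_min(1e-8).log()).sum()
    loss = ent + div
    opt.zero_grad(); loss.backward(); opt.step()
```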
Related papers
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test time using a few unlabeled samples to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z)
- AuG-KD: Anchor-Based Mixup Generation for Out-of-Domain Knowledge Distillation [33.208860361882095]
Data-Free Knowledge Distillation (DFKD) methods have emerged as direct solutions.
However, models derived from DFKD suffer significant performance degradation when directly adopted for real-world applications.
We propose AuG-KD, a simple but effective method that selectively transfers the teacher's appropriate knowledge.
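For context, mixup linearly interpolates pairs of samples, and AuG-KD builds an anchor-based variant on top of it. The sketch below shows only the generic mixup operation under assumed inputs; the paper's anchor selection and scheduling are not reproduced here.

```python
# Generic mixup between generated samples and "anchor" samples. Only the
# basic interpolation is shown; AuG-KD's anchor selection and scheduling
# are not reproduced here.
import torch

def mixup(x_generated: torch.Tensor, x_anchor: torch.Tensor, alpha: float = 0.4):
    """Convex combination of two batches with a Beta-distributed weight."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x_generated + (1.0 - lam) * x_anchor, lam

x_gen = torch.randn(8, 3, 32, 32)     # e.g., samples from a DFKD generator
x_anc = torch.randn(8, 3, 32, 32)     # e.g., anchor samples near the target domain
x_mix, lam = mixup(x_gen, x_anc)
```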
arXiv Detail & Related papers (2024-03-11T03:34:14Z)
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network.
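The separation step can be pictured with a plain Fourier split, shown below. Treating phase as domain-invariant and amplitude as domain-specific is a common convention in Fourier-based adaptation; this sketch is an assumption about the idea, not the paper's learnable adapter.

```python
# Rough sketch: split a feature map into Fourier amplitude and phase.
# Phase-as-invariant / amplitude-as-specific is a common convention in
# Fourier-based adaptation; 4Ds' learnable adapter and fusion-activation
# mechanism are not reproduced here.
import torch

feat = torch.randn(4, 16, 8, 8)        # (batch, channels, H, W) feature map
spec = torch.fft.fft2(feat)            # 2-D FFT over the spatial dimensions
amplitude = spec.abs()                 # domain-specific component (assumed)
phase = spec.angle()                   # domain-invariant component (assumed)
# Recombine (e.g., after adapting the amplitude) and return to feature space.
recombined = torch.fft.ifft2(amplitude * torch.exp(1j * phase)).real
```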
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
- Bridged-GNN: Knowledge Bridge Learning for Effective Knowledge Transfer [65.42096702428347]
Graph Neural Networks (GNNs) aggregate information from neighboring nodes.
Knowledge Bridge Learning (KBL) learns a knowledge-enhanced posterior distribution for target domains.
Bridged-GNN includes an Adaptive Knowledge Retrieval module to build the Bridged-Graph and a Graph Knowledge Transfer module.
arXiv Detail & Related papers (2023-08-18T12:14:51Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD.
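A crude illustration of the selective-sharing idea: each client contributes soft predictions on proxy samples only where it is confident, filtering out potentially misleading knowledge. The confidence threshold is an assumed stand-in for the paper's selector.

```python
# Crude sketch of selective knowledge sharing in federated distillation:
# a client shares soft predictions on proxy samples only where it is
# confident. The threshold rule is an assumption, not Selective-FD itself.
import torch
import torch.nn.functional as F

def selective_soft_labels(model, proxy_x, threshold=0.9):
    with torch.no_grad():
        probs = F.softmax(model(proxy_x), dim=1)
    keep = probs.max(dim=1).values >= threshold   # confident predictions only
    return probs[keep], keep                      # shared labels + sample mask

client_model = torch.nn.Linear(32, 10)
labels, mask = selective_soft_labels(client_model, torch.randn(64, 32))
```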
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- A Concise Review of Transfer Learning [1.5771347525430772]
Transfer learning aims to boost the performance of a target learner by leveraging knowledge from related source data.
Traditional machine learning and data mining techniques assume that the training and testing data are drawn from the same feature space and distribution.
arXiv Detail & Related papers (2021-04-05T20:34:55Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different, well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper tackles a realistic setting in which only a classification model trained on the source domain is available, instead of access to the source data.
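The labeling-transfer half can be pictured as self-training: pseudo-label the target data with the current model and retrain on the confident subset. The plain confidence threshold below is a simplification of the paper's labeling scheme.

```python
# Sketch of a self-training ("labeling transfer") step: pseudo-label target
# data with the current model, then train on confident samples only.
# Plain thresholding is a simplification of the paper's labeling scheme.
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 10)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x_target = torch.randn(256, 32)                  # unlabeled target batch

with torch.no_grad():
    probs = F.softmax(model(x_target), dim=1)
conf, pseudo = probs.max(dim=1)
keep = conf >= 0.8                               # trust only confident pseudo-labels
if keep.any():                                   # guard: skip if nothing is confident
    loss = F.cross_entropy(model(x_target[keep]), pseudo[keep])
    opt.zero_grad(); loss.backward(); opt.step()
```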
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns from both unlabeled target data and labeled source data via two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
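At its simplest, the dual supervision reduces to combining distillation signals from the two teachers on the same batch; the paper's loss weighting and teacher-update details are omitted in this assumed sketch.

```python
# Simplest form of dual-teacher supervision: the student matches both an
# intra-domain teacher and an inter-domain teacher on the same batch.
# Loss weighting and the paper's teacher-update details are omitted.
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Linear(32, 10)
teacher_intra = nn.Linear(32, 10).eval()   # e.g., trained on target-domain data
teacher_inter = nn.Linear(32, 10).eval()   # e.g., trained on cross-modality data
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(64, 32)
with torch.no_grad():
    p_intra = F.softmax(teacher_intra(x), dim=1)
    p_inter = F.softmax(teacher_inter(x), dim=1)
log_s = F.log_softmax(student(x), dim=1)
loss = (F.kl_div(log_s, p_intra, reduction="batchmean")
        + F.kl_div(log_s, p_inter, reduction="batchmean"))
opt.zero_grad(); loss.backward(); opt.step()
```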
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
- On the application of transfer learning in prognostics and health management [0.0]
Data availability has encouraged researchers and industry practitioners to rely on data-driven machine learning and deep learning models for fault diagnostics and prognostics more than ever.
These models provide unique advantages; however, their performance depends heavily on the training data and on how well those data represent the test data.
Transfer learning is an approach that can remedy this issue by keeping portions of what is learned from previous training and transferring them to the new application, as in the sketch below.
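The standard recipe this describes: reuse a pretrained backbone, freeze it, and retrain only a new task-specific head on the target data. The torchvision ResNet-18 here is just a convenient assumed example backbone.

```python
# Standard transfer-learning recipe: keep a pretrained feature extractor,
# freeze it, and train only a new task-specific head on target data.
# torchvision's ResNet-18 is just a convenient example backbone.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad_(False)                  # keep previously learned features
backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # new head, e.g. 5 fault classes
trainable = [p for p in backbone.parameters() if p.requires_grad]  # head only
```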
arXiv Detail & Related papers (2020-07-03T23:35:18Z)
- Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely adaptable knowledge tracing (AKT), to address the domain adaptation for knowledge tracing (DAKT) problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) based on deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
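The instance-selection step can be pictured as follows: an auto-encoder pre-trained on target data reconstructs target-like source instances well, so low reconstruction error can serve as a selection signal. This criterion is an assumption for illustration, not necessarily the paper's exact procedure.

```python
# Sketch of auto-encoder-based source-instance selection: pre-train an
# auto-encoder on target data, then keep source instances it reconstructs
# well (i.e., instances that look target-like). The selection criterion is
# an illustrative assumption.
import torch
import torch.nn as nn

ae = nn.Sequential(nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x_target = torch.randn(512, 32)                    # stand-in target data

for _ in range(100):                               # pre-train the auto-encoder
    loss = ((ae(x_target) - x_target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x_source = torch.randn(1024, 32)                   # stand-in source data
with torch.no_grad():
    err = ((ae(x_source) - x_source) ** 2).mean(dim=1)
selected = x_source[err <= err.median()]           # keep the most target-like half
```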
arXiv Detail & Related papers (2020-01-14T15:04:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.