Efficient Sub-structured Knowledge Distillation
- URL: http://arxiv.org/abs/2203.04825v1
- Date: Wed, 9 Mar 2022 15:56:49 GMT
- Title: Efficient Sub-structured Knowledge Distillation
- Authors: Wenye Lin, Yangming Li, Lemao Liu, Shuming Shi, Hai-tao Zheng
- Abstract summary: We propose an approach that is much simpler in its formulation and far more efficient for training than existing approaches.
We transfer the knowledge from a teacher model to its student model by locally matching their predictions on all sub-structures, instead of the whole output space.
- Score: 52.5931565465661
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Structured prediction models aim at solving a type of problem where the
output is a complex structure, rather than a single variable. Performing
knowledge distillation for such models is not trivial due to their
exponentially large output space. In this work, we propose an approach that is
much simpler in its formulation and far more efficient for training than
existing approaches. Specifically, we transfer the knowledge from a teacher
model to its student model by locally matching their predictions on all
sub-structures, instead of the whole output space. In this manner, we avoid
adopting some time-consuming techniques like dynamic programming (DP) for
decoding output structures, which permits parallel computation and makes the
training process even faster in practice. Besides, it encourages the student
model to better mimic the internal behavior of the teacher model. Experiments
on two structured prediction tasks demonstrate that our approach outperforms
previous methods and halves the time cost for one training epoch.
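As a concrete illustration of the local matching idea, below is a minimal PyTorch-style sketch for a linear-chain sequence labeler. The choice of sub-structures (per-position emission scores and tag-transition scores), the use of KL divergence, and the temperature are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch of sub-structured knowledge distillation, assuming a
# linear-chain sequence labeler (encoder emissions plus a tag-transition
# matrix). The sub-structure choice, the KL divergence, and the temperature
# are illustrative assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def substructure_kd_loss(student_emissions: torch.Tensor,
                         teacher_emissions: torch.Tensor,
                         student_transitions: torch.Tensor,
                         teacher_transitions: torch.Tensor,
                         temperature: float = 1.0) -> torch.Tensor:
    """Locally match teacher and student predictions on each sub-structure.

    emissions: (batch, seq_len, num_tags) scores from each model.
    transitions: (num_tags, num_tags) tag-to-tag scores from each model.
    Every term is a local softmax, so no dynamic programming over the
    exponentially large output space is needed and the loss can be
    computed in parallel across positions.
    """
    # Node sub-structures: match the tag distribution at every position.
    t_node = F.softmax(teacher_emissions / temperature, dim=-1)
    s_node = F.log_softmax(student_emissions / temperature, dim=-1)
    node_loss = F.kl_div(s_node, t_node, reduction="batchmean")

    # Edge sub-structures: match the distribution over tag transitions.
    t_edge = F.softmax(teacher_transitions.flatten() / temperature, dim=-1)
    s_edge = F.log_softmax(student_transitions.flatten() / temperature, dim=-1)
    edge_loss = F.kl_div(s_edge, t_edge, reduction="sum")

    return node_loss + edge_loss

In practice such a distillation term would typically be added to the student's usual supervised loss with a weighting coefficient; since every term is a local softmax, no decoding of output structures is required during training.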
Related papers
- Majority Kernels: An Approach to Leverage Big Model Dynamics for Efficient Small Model Training [32.154166415680066]
Methods like distillation, compression, or quantization help leverage highly performant large models to induce smaller performant ones.
This paper explores the hypothesis that a single training run can simultaneously train a larger model for performance and derive a smaller model for deployment.
arXiv Detail & Related papers (2024-02-07T17:07:41Z)
- Manipulating Predictions over Discrete Inputs in Machine Teaching [43.914943603238996]
This paper focuses on machine teaching in the discrete domain, specifically on manipulating student models' predictions based on the goals of teachers via changing the training data efficiently.
We formulate this task as an optimization problem and solve it with an iterative search algorithm.
Our algorithm demonstrates significant numerical merit in scenarios where a teacher corrects erroneous predictions to improve the student model, or maliciously manipulates the model to misclassify specific samples into a target class for the teacher's own benefit.
arXiv Detail & Related papers (2024-01-31T14:23:51Z)
- DynaLay: An Introspective Approach to Dynamic Layer Selection for Deep Networks [0.0]
We introduce DynaLay, an alternative architecture that features a decision-making agent to adaptively select the most suitable layers for processing each input.
DynaLay reevaluates more complex inputs during inference, adjusting the computational effort to optimize both performance and efficiency.
Our experiments demonstrate that DynaLay achieves accuracy comparable to conventional deep models while significantly reducing computational demands.
arXiv Detail & Related papers (2023-12-20T05:55:05Z)
- PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has been recently proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT, which overcomes the need for real training data with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- Efficient Prompting via Dynamic In-Context Learning [76.83516913735072]
We propose DynaICL, a recipe for efficient prompting with black-box generalist models.
DynaICL dynamically allocates in-context examples according to the input complexity and the computational budget.
We find that DynaICL saves up to 46% of the token budget compared to the common practice of allocating the same number of in-context examples to each input; a hypothetical sketch of this budget-aware allocation appears after this list.
arXiv Detail & Related papers (2023-05-18T17:58:31Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Dense Unsupervised Learning for Video Segmentation [49.46930315961636]
We present a novel approach to unsupervised learning for video object segmentation (VOS).
Unlike previous work, our formulation allows learning dense feature representations directly in a fully convolutional regime.
Our approach exceeds the segmentation accuracy of previous work despite using significantly less training data and compute power.
arXiv Detail & Related papers (2021-11-11T15:15:11Z)
- A Practical Incremental Method to Train Deep CTR Models [37.54660958085938]
We introduce a practical incremental method to train deep CTR models, which consists of three decoupled modules.
Our method can achieve comparable performance to the conventional batch mode training with much better training efficiency.
arXiv Detail & Related papers (2020-09-04T12:35:42Z)
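As referenced in the DynaICL entry above, here is a hypothetical Python sketch of budget-aware in-context example allocation. The complexity scorer, the linear allocation rule, and all names and parameters are assumptions for illustration; they are not the paper's actual meta-controller.

# Hypothetical sketch of budget-aware in-context example allocation in the
# spirit of DynaICL. The complexity_fn, the linear allocation rule, and all
# parameter names are illustrative assumptions, not the paper's method.
from typing import Callable, List

def allocate_examples(inputs: List[str],
                      complexity_fn: Callable[[str], float],
                      min_k: int = 1,
                      max_k: int = 8,
                      token_budget: int = 4096,
                      tokens_per_example: int = 64) -> List[int]:
    """Assign more in-context examples to harder inputs, within a token budget."""
    ks = []
    remaining = token_budget
    for text in inputs:
        score = max(0.0, min(1.0, complexity_fn(text)))      # normalized difficulty
        k = min_k + round(score * (max_k - min_k))            # harder input -> more examples
        k = min(k, max(remaining // tokens_per_example, 0))   # respect the remaining budget
        ks.append(k)
        remaining -= k * tokens_per_example
    return ks

# Example usage with a toy complexity score based on input length.
if __name__ == "__main__":
    demo_inputs = ["short question", "a much longer and more involved question " * 4]
    print(allocate_examples(demo_inputs, complexity_fn=lambda s: len(s) / 200))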