Knowledge Distillation Approaches for Accurate and Efficient Recommender System
- URL: http://arxiv.org/abs/2407.13952v1
- Date: Fri, 19 Jul 2024 00:01:18 GMT
- Title: Knowledge Distillation Approaches for Accurate and Efficient Recommender System
- Authors: SeongKu Kang
- Abstract summary: This dissertation is devoted to developing knowledge distillation methods for recommender systems.
The proposed methods are categorized according to their knowledge sources.
We present a new learning framework that compresses the ranking knowledge of heterogeneous recommendation models.
- Score: 8.27119606183475
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite its breakthroughs in classification problems, knowledge distillation (KD) for recommendation models and ranking problems has not been well studied in the previous literature. This dissertation is devoted to developing knowledge distillation methods for recommender systems that fully improve the performance of a compact model. We propose novel distillation methods designed for recommender systems, categorized according to their knowledge sources as follows. (1) Latent knowledge: we propose two methods that transfer latent knowledge of user/item representations. They effectively transfer knowledge of niche tastes with a balanced distillation strategy that prevents the KD process from being biased towards a small number of large preference groups. We also propose a new method that transfers user/item relations in the representation space; it selectively transfers essential relations in light of the limited capacity of the compact model. (2) Ranking knowledge: we propose three methods that transfer ranking knowledge from the recommendation results. They formulate the KD process as a ranking matching problem and transfer the knowledge via a listwise learning strategy. Further, we present a new learning framework that compresses the ranking knowledge of heterogeneous recommendation models; it is designed to ease the computational burden of model ensembles, a dominant solution in many recommendation applications. We validate the benefits of the proposed methods and frameworks through extensive experiments. In summary, this dissertation sheds light on knowledge distillation approaches for a better accuracy-efficiency trade-off in recommendation models.
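The ranking-knowledge part of the abstract frames distillation as a ranking matching problem trained with a listwise learning strategy. Below is a minimal sketch of that general idea, assuming a PyTorch setting and a ListMLE-style (Plackett-Luce) objective over the teacher's top-k recommendation list; the function name, tensor shapes, and the cutoff k are illustrative assumptions rather than the dissertation's exact formulation.

```python
import torch

def listwise_ranking_distillation(student_scores: torch.Tensor,
                                  teacher_scores: torch.Tensor,
                                  k: int = 10) -> torch.Tensor:
    """ListMLE-style distillation loss: the student is trained to reproduce
    the teacher's top-k item ordering for each user.

    student_scores, teacher_scores: [num_users, num_items] relevance scores.
    """
    # Items ranked by the teacher; only the top-k are matched, reflecting
    # the limited capacity of the compact student model.
    _, teacher_rank = torch.topk(teacher_scores, k=k, dim=1)       # [U, k]
    s = torch.gather(student_scores, 1, teacher_rank)              # student scores in teacher's order

    # Plackett-Luce log-likelihood of the teacher's ordering under the student:
    #   log P = sum_i ( s_i - log sum_{j >= i} exp(s_j) )
    tail_logsumexp = torch.flip(
        torch.logcumsumexp(torch.flip(s, dims=[1]), dim=1), dims=[1]
    )
    log_likelihood = (s - tail_logsumexp).sum(dim=1)
    return -log_likelihood.mean()


# Toy usage: 4 users, 100 items, randomly scored.
student = torch.randn(4, 100, requires_grad=True)
teacher = torch.randn(4, 100)
loss = listwise_ranking_distillation(student, teacher, k=10)
loss.backward()
```

In practice a term like this would typically be added to the student's own recommendation loss with a weighting coefficient; the top-k cutoff bounds how much of the teacher's ranking the compact student is asked to absorb.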
Related papers
- Pre-trained Language Model and Knowledge Distillation for Lightweight Sequential Recommendation [51.25461871988366]
We propose a sequential recommendation algorithm based on a pre-trained language model and knowledge distillation.
The proposed algorithm enhances recommendation accuracy and provides timely recommendation services.
arXiv Detail & Related papers (2024-09-23T08:39:07Z)
- Adaptive Explicit Knowledge Transfer for Knowledge Distillation [17.739979156009696]
We show that the performance of logit-based knowledge distillation can be improved by effectively delivering the probability distribution for the non-target classes from the teacher model.
We propose a new loss that enables the student to learn explicit knowledge along with implicit knowledge in an adaptive manner.
Experimental results demonstrate that the proposed adaptive explicit knowledge transfer (AEKT) method achieves improved performance compared to state-of-the-art KD methods.
arXiv Detail & Related papers (2024-09-03T07:42:59Z)
- Knowledge Editing in Language Models via Adapted Direct Preference Optimization [50.616875565173274]
Large Language Models (LLMs) can become outdated over time.
Knowledge Editing aims to overcome this challenge using weight updates that do not require expensive retraining.
arXiv Detail & Related papers (2024-06-14T11:02:21Z)
- Enhancing Scalability in Recommender Systems through Lottery Ticket Hypothesis and Knowledge Distillation-based Neural Network Pruning [1.3654846342364308]
This study introduces an innovative approach aimed at the efficient pruning of neural networks, with a particular focus on their deployment on edge devices.
Our method involves the integration of the Lottery Ticket Hypothesis (LTH) with the Knowledge Distillation (KD) framework, resulting in the formulation of three distinct pruning models.
Our approaches yielded a GPU computation-power reduction of up to 66.67%.
arXiv Detail & Related papers (2024-01-19T04:17:50Z)
- Unbiased Knowledge Distillation for Recommendation [66.82575287129728]
Knowledge distillation (KD) has been applied in recommender systems (RS) to reduce inference latency.
Traditional solutions first train a full teacher model from the training data, and then transfer its knowledge to supervise the learning of a compact student model.
We find that such a standard distillation paradigm incurs a serious bias issue: popular items are more heavily recommended after distillation.
arXiv Detail & Related papers (2022-11-27T05:14:03Z)
- A Closer Look at Knowledge Distillation with Features, Logits, and Gradients [81.39206923719455]
Knowledge distillation (KD) is a widely used strategy for transferring learned knowledge from one neural network model to another.
This work provides a new perspective to motivate a set of knowledge distillation strategies by approximating the classical KL-divergence criteria with different knowledge sources.
Our analysis indicates that logits are generally a more efficient knowledge source and suggests that having sufficient feature dimensions is crucial for the model design.
arXiv Detail & Related papers (2022-03-18T21:26:55Z)
- Dual Correction Strategy for Ranking Distillation in Top-N Recommender System [22.37864671297929]
This paper presents a Dual Correction strategy for Knowledge Distillation (DCD).
DCD transfers the ranking information from the teacher model to the student model in a more efficient manner.
Our experiments show that the proposed method outperforms the state-of-the-art baselines.
arXiv Detail & Related papers (2021-09-08T07:00:45Z)
- Residual Knowledge Distillation [96.18815134719975]
This work proposes Residual Knowledge Distillation (RKD), which further distills the knowledge by introducing an assistant model (A).
In this way, the student (S) is trained to mimic the feature maps of the teacher (T), and A aids this process by learning the residual error between them (a minimal sketch of this idea appears after this list).
Experiments show that our approach achieves appealing results on popular classification datasets, CIFAR-100 and ImageNet.
arXiv Detail & Related papers (2020-02-21T07:49:26Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
The framework treats a set of separately trained models as 'Experts' and aggregates their knowledge to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
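As noted in the Residual Knowledge Distillation entry above, RKD has a student S mimic the teacher T's feature maps while an assistant A learns the residual error between them. The sketch below illustrates that idea under assumptions of my own (a PyTorch setting, plain MSE objectives, and an assistant that reads the student's features); it is not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def residual_kd_losses(student_feat: torch.Tensor,
                       teacher_feat: torch.Tensor,
                       assistant: nn.Module):
    """Two losses: the student S mimics the teacher T's feature map,
    while an assistant A learns the residual error between them."""
    # S is trained to reproduce T's features directly.
    mimic_loss = F.mse_loss(student_feat, teacher_feat)

    # A is trained to predict what S still gets wrong (the residual T - S);
    # the target is detached so it stays fixed for this term.
    residual_target = (teacher_feat - student_feat).detach()
    residual_loss = F.mse_loss(assistant(student_feat), residual_target)

    return mimic_loss, residual_loss


# Toy usage with a linear assistant over 64-dimensional features.
assistant = nn.Linear(64, 64)
student_feat = torch.randn(8, 64, requires_grad=True)
teacher_feat = torch.randn(8, 64)
mimic, residual = residual_kd_losses(student_feat, teacher_feat, assistant)
(mimic + residual).backward()
```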