Towards Distillation-Resistant Large Language Models: An Information-Theoretic Perspective
- URL: http://arxiv.org/abs/2602.03396v1
- Date: Tue, 03 Feb 2026 11:16:59 GMT
- Title: Towards Distillation-Resistant Large Language Models: An Information-Theoretic Perspective
- Authors: Hao Fang, Tianyi Zhang, Tianqu Zhuang, Jiawei Kong, Kuofeng Gao, Bin Chen, Leqi Liang, Shu-Tao Xia, Ke Xu
- Abstract summary: Existing defenses focus exclusively on text-based distillation, leaving the important logit-based distillation largely unexplored. We characterize distillation-relevant information in teacher outputs using the conditional mutual information (CMI) between teacher logits and input queries conditioned on ground-truth labels. We derive a CMI-inspired anti-distillation objective to optimize a purifying transformation of the teacher outputs, which effectively removes distillation-relevant information while preserving output utility.
- Score: 52.25797439810419
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Proprietary large language models (LLMs) embody substantial economic value and are generally exposed only as black-box APIs, yet adversaries can still exploit their outputs to extract knowledge via distillation. Existing defenses focus exclusively on text-based distillation, leaving the important logit-based distillation largely unexplored. In this work, we analyze this problem and present an effective solution from an information-theoretic perspective. We characterize distillation-relevant information in teacher outputs using the conditional mutual information (CMI) between teacher logits and input queries conditioned on ground-truth labels. This quantity captures contextual information beneficial for model extraction, motivating us to defend against distillation via CMI minimization. Guided by our theoretical analysis, we propose learning a transformation matrix that purifies the original outputs to enhance distillation resistance. We further derive a CMI-inspired anti-distillation objective to optimize this transformation, which effectively removes distillation-relevant information while preserving output utility. Extensive experiments across multiple LLMs and strong distillation algorithms demonstrate that the proposed method significantly degrades distillation performance while preserving task accuracy, effectively protecting models' intellectual property.
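As a rough illustration of the idea, the sketch below applies a learnable transformation matrix to teacher logits and trains it with a simple surrogate for the conditional mutual information I(Z'; Q | Y) between purified logits Z' and queries Q given ground-truth labels Y: within each label class, outputs are pushed toward the class-conditional mean so that they carry little query-specific signal, while a cross-entropy term preserves task utility. This is a minimal sketch under stated assumptions, not the authors' released method; the CMI surrogate, the synthetic data, the argmax-derived labels, and the loss weighting are all illustrative choices.

```python
# Hedged sketch of a CMI-style "output purification" defense for teacher logits.
# NOTE: illustrative reconstruction only; the transformation W, the CMI surrogate,
# and the trade-off weight below are assumptions, not the paper's exact objective.

import torch
import torch.nn.functional as F

torch.manual_seed(0)

num_classes = 10   # output dimension of the teacher logits
num_queries = 256  # number of (query, label) pairs used to fit W

# Stand-ins for teacher logits Z and ground-truth labels Y; in practice these
# would come from the protected model's responses on a calibration set.
teacher_logits = torch.randn(num_queries, num_classes) * 3.0
labels = teacher_logits.argmax(dim=-1)  # assumption: teacher is usually correct

# Learnable transformation applied to every outgoing logit vector.
W = torch.nn.Parameter(torch.eye(num_classes))
optimizer = torch.optim.Adam([W], lr=1e-2)

def cmi_surrogate(probs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Crude surrogate for I(Z'; Q | Y): average KL between each purified output
    and the mean output of its ground-truth class. If outputs within a class
    collapse to one distribution, they expose no query-specific signal."""
    loss = probs.new_zeros(())
    for y in labels.unique():
        p = probs[labels == y]
        class_mean = p.mean(dim=0, keepdim=True).clamp_min(1e-8)
        # KL(p_i || class_mean), weighted by the class's share of the batch
        loss = loss + F.kl_div(class_mean.log().expand_as(p), p,
                               reduction="batchmean") * (p.shape[0] / probs.shape[0])
    return loss

for step in range(200):
    purified_logits = teacher_logits @ W               # z' = W z, per query
    probs = purified_logits.softmax(dim=-1)

    anti_distill = cmi_surrogate(probs, labels)         # strip query-conditional info
    utility = F.cross_entropy(purified_logits, labels)  # keep the answers correct

    loss = utility + 1.0 * anti_distill                 # trade-off weight is a guess
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"utility loss {utility.item():.3f}, CMI surrogate {anti_distill.item():.3f}")
```

A would-be distiller who trains a student on the purified logits mostly sees class-conditional averages rather than query-specific dark knowledge, which is the intuition the CMI-minimization framing makes precise.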
Related papers
- DOGe: Defensive Output Generation for LLM Protection Against Knowledge Distillation [49.58082402742583]
Large Language Models (LLMs) represent substantial intellectual and economic investments. LLMs can inadvertently facilitate model imitation via knowledge distillation (KD). This paper introduces an effective and efficient Defensive Output Generation (DOGe) strategy.
arXiv Detail & Related papers (2025-05-26T04:31:38Z) - Quantification of Large Language Model Distillation [22.680566179355335]
We propose a framework to evaluate and quantify model distillation. Our method addresses two key aspects: (1) identifying identity cognition contradictions to assess discrepancies in how models perceive and represent identity-related information, and (2) analyzing multi-granularity response similarities across models to measure the extent of homogenization.
arXiv Detail & Related papers (2025-01-22T03:57:52Z) - Multi-perspective Contrastive Logit Distillation [12.589031892370809]
We introduce a novel and efficient logit distillation method, Multi-perspective Contrastive Logit Distillation (MCLD), which substantially improves the performance and efficacy of logit distillation. MCLD attains state-of-the-art performance in image classification and transfer learning tasks across multiple datasets, including CIFAR-100, ImageNet, Tiny-ImageNet, and STL-10.
arXiv Detail & Related papers (2024-11-16T04:08:41Z) - Knowledge Distillation via Query Selection for Detection Transformer [25.512519971607237]
This paper addresses the challenge of compressing DETR by leveraging knowledge distillation.
A critical aspect of DETRs' performance is their reliance on queries to interpret object representations accurately.
Our visual analysis indicates that hard-negative queries, focusing on foreground elements, are crucial for enhancing distillation outcomes.
arXiv Detail & Related papers (2024-09-10T11:49:28Z) - Knowledge Distillation with Refined Logits [31.205248790623703]
We introduce Refined Logit Distillation (RLD) to address the limitations of current logit distillation methods. Our approach is motivated by the observation that even high-performing teacher models can make incorrect predictions. Our method can effectively eliminate misleading information from the teacher while preserving crucial class correlations.
arXiv Detail & Related papers (2024-08-14T17:59:32Z) - Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation [96.92250565207017]
We study the data efficiency and selection for the dataset distillation task.
By reformulating the dynamics of distillation, we provide insight into the inherent redundancy in the real dataset.
We identify the most influential samples based on their causal effects on the distillation.
arXiv Detail & Related papers (2023-05-28T06:53:41Z) - Explicit and Implicit Knowledge Distillation via Unlabeled Data [5.702176304876537]
We propose an efficient unlabeled sample selection method to replace high computational generators.
We also propose a class-dropping mechanism to suppress the label noise caused by the data domain shifts.
Experimental results show that our method can quickly converge and obtain higher accuracy than other state-of-the-art methods.
arXiv Detail & Related papers (2023-02-17T09:10:41Z) - Mind the Gap in Distilling StyleGANs [100.58444291751015]
The StyleGAN family is one of the most popular Generative Adversarial Networks (GANs) for unconditional generation.
This paper provides a comprehensive study of distilling from the popular StyleGAN-like architecture.
arXiv Detail & Related papers (2022-08-18T14:18:29Z) - Localization Distillation for Object Detection [134.12664548771534]
Previous knowledge distillation (KD) methods for object detection mostly focus on feature imitation instead of mimicking the classification logits.
We present a novel localization distillation (LD) method which can efficiently transfer the localization knowledge from the teacher to the student.
We show that logit mimicking can outperform feature imitation, and that the absence of localization distillation is a critical reason why logit mimicking has underperformed for years.
arXiv Detail & Related papers (2022-04-12T17:14:34Z) - Why distillation helps: a statistical perspective [69.90148901064747]
Knowledge distillation is a technique for improving the performance of a simple "student" model by training it to emulate a more powerful "teacher" model.
While this simple approach has proven widely effective, a basic question remains unresolved: why does distillation help?
We show how distillation complements existing negative mining techniques for extreme multiclass retrieval.
arXiv Detail & Related papers (2020-05-21T01:49:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences arising from its use.