Grounding and Enhancing Informativeness and Utility in Dataset Distillation
- URL: http://arxiv.org/abs/2601.21296v1
- Date: Thu, 29 Jan 2026 05:49:17 GMT
- Title: Grounding and Enhancing Informativeness and Utility in Dataset Distillation
- Authors: Shaobo Wang, Yantai Yang, Guo Chen, Peiru Li, Kaixin Li, Yufa Zhou, Zhaorun Chen, Linfeng Zhang
- Abstract summary: This paper revisits knowledge distillation-based dataset distillation within a solid theoretical framework. We introduce the concepts of Informativeness and Utility, capturing crucial information within a sample and essential samples in the training set, respectively. We then present InfoUtil, a framework that balances informativeness and utility in synthesizing the distilled dataset.
- Score: 16.992910621801496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dataset Distillation (DD) seeks to create a compact dataset from a large, real-world dataset. While recent methods often rely on heuristic approaches to balance efficiency and quality, the fundamental relationship between original and synthetic data remains underexplored. This paper revisits knowledge distillation-based dataset distillation within a solid theoretical framework. We introduce the concepts of Informativeness and Utility, capturing crucial information within a sample and essential samples in the training set, respectively. Building on these principles, we define optimal dataset distillation mathematically. We then present InfoUtil, a framework that balances informativeness and utility in synthesizing the distilled dataset. InfoUtil incorporates two key components: (1) game-theoretic informativeness maximization using Shapley Value attribution to extract key information from samples, and (2) principled utility maximization by selecting globally influential samples based on Gradient Norm. These components ensure that the distilled dataset is both informative and utility-optimized. Experiments demonstrate that our method achieves a 6.1% performance improvement over the previous state-of-the-art approach on the ImageNet-1K dataset using ResNet-18.
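The two components named in the abstract (Shapley-Value-based informativeness and Gradient-Norm-based utility) are only described at a high level here. The sketch below is a rough illustration under our own assumptions, not the InfoUtil implementation: it estimates per-patch Shapley values for a single image by Monte-Carlo sampling of patch orderings, and scores training samples by the norm of their per-sample loss gradient. All names, shapes, and hyperparameters are made up for the example.

```python
# Illustrative sketch only (not the authors' InfoUtil code).
# (1) Monte-Carlo Shapley attribution: score how much each image patch contributes
#     to the model's logit for the true class (a common informativeness proxy).
# (2) Gradient-norm scoring: rank training samples by the norm of their loss gradient
#     (a simple utility proxy).
import torch
import torch.nn.functional as F


def shapley_patch_attribution(model, image, label, grid=4, num_permutations=32):
    """Estimate a Shapley value per patch of a (C, H, W) image on a grid x grid partition."""
    model.eval()
    _, h, w = image.shape
    ph, pw = h // grid, w // grid
    baseline = torch.zeros_like(image)            # "absent" patches are zeroed out
    values = torch.zeros(grid * grid)

    def class_score(masked):
        with torch.no_grad():
            return model(masked.unsqueeze(0))[0, label].item()

    for _ in range(num_permutations):
        current = baseline.clone()
        prev = class_score(current)
        for idx in torch.randperm(grid * grid).tolist():
            row, col = divmod(idx, grid)
            sl = (slice(None),
                  slice(row * ph, (row + 1) * ph),
                  slice(col * pw, (col + 1) * pw))
            current[sl] = image[sl]               # reveal one more patch
            cur = class_score(current)
            values[idx] += cur - prev             # marginal contribution of this patch
            prev = cur
    return values / num_permutations


def gradient_norm_scores(model, loader, device="cpu"):
    """Score each sample by the L2 norm of its loss gradient w.r.t. the model parameters."""
    scores = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        for xi, yi in zip(x, y):                  # per-sample gradients (simple but slow)
            model.zero_grad()
            loss = F.cross_entropy(model(xi.unsqueeze(0)), yi.unsqueeze(0))
            loss.backward()
            sq = sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
            scores.append(sq.sqrt().item())
    return scores
```

In an InfoUtil-style pipeline, such scores would presumably decide which image content is worth keeping and which real samples should drive the synthesis; the actual selection and synthesis rules are in the paper.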
Related papers
- Boost Self-Supervised Dataset Distillation via Parameterization, Predefined Augmentation, and Approximation [19.552569546864913]
We propose a technique to distill images and their self-supervisedly trained representations into a distilled set. This procedure effectively extracts rich information from real datasets, yielding distilled sets with enhanced cross-architecture generalizability. In particular, we introduce an innovative parameterization of images and representations via distinct low-dimensional bases (a minimal sketch of this idea follows this entry).
arXiv Detail & Related papers (2025-07-29T02:51:56Z)
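The low-dimensional-basis parameterization mentioned in the entry above can be sketched in a few lines. The snippet below is a guess at the general shape of such a parameterization (shared learnable basis atoms plus per-image coefficients), with made-up names and sizes; it is not the paper's actual formulation.

```python
# Hypothetical sketch of parameterizing synthetic images with a shared low-dimensional
# basis: each distilled image is a learnable coefficient vector over learnable basis atoms.
import torch
import torch.nn as nn


class BasisParameterizedImages(nn.Module):
    def __init__(self, num_images, num_bases=64, image_shape=(3, 32, 32)):
        super().__init__()
        c, h, w = image_shape
        self.image_shape = image_shape
        self.bases = nn.Parameter(torch.randn(num_bases, c * h * w) * 0.01)    # shared atoms
        self.coeffs = nn.Parameter(torch.randn(num_images, num_bases) * 0.01)  # per-image codes

    def forward(self):
        flat = self.coeffs @ self.bases                # (num_images, C*H*W)
        return flat.view(-1, *self.image_shape)        # decode into image tensors


# Usage: the decoded images are differentiable in both bases and coefficients,
# so any downstream distillation loss can update them jointly.
synthetic = BasisParameterizedImages(num_images=100)
images = synthetic()                                   # shape (100, 3, 32, 32)
```

A separate basis could presumably be kept for the self-supervised representations in the same way.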
- Generative Dataset Distillation Based on Self-knowledge Distillation [49.20086587208214]
We present a novel generative dataset distillation method that can improve the accuracy of aligning prediction logits. Our approach integrates self-knowledge distillation to achieve more precise distribution matching between the synthetic and original data. Our method outperforms existing state-of-the-art methods, resulting in superior distillation performance.
arXiv Detail & Related papers (2025-01-08T00:43:31Z)
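As a small illustration of what "aligning prediction logits" can look like in code, the snippet below shows a standard temperature-scaled KL divergence between logits produced on synthetic data and soft targets obtained from the original data. This is an assumed, generic formulation, not the paper's self-knowledge distillation method.

```python
# Illustrative sketch (an assumption, not the paper's code) of aligning prediction
# logits: a temperature-scaled KL divergence between logits on synthetic data and
# soft targets derived from the original data.
import torch
import torch.nn.functional as F


def logit_alignment_loss(synthetic_logits, target_logits, temperature=4.0):
    """KL( soft targets from real data || predictions on synthetic data )."""
    log_p_syn = F.log_softmax(synthetic_logits / temperature, dim=1)
    p_target = F.softmax(target_logits / temperature, dim=1)
    # Standard temperature-scaled distillation loss (scaled by T^2 to keep gradients comparable).
    return F.kl_div(log_p_syn, p_target, reduction="batchmean") * (temperature ** 2)
```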
- Prioritize Alignment in Dataset Distillation [27.71563788300818]
Existing methods use the agent model to extract information from the target dataset and embed it into the distilled dataset.
We find that existing methods introduce misaligned information in both information extraction and embedding stages.
We propose Prioritize Alignment in Dataset Distillation (PAD), which aligns information from the following two perspectives.
arXiv Detail & Related papers (2024-08-06T17:07:28Z)
- What is Dataset Distillation Learning? [32.99890244958794]
We study the behavior, representativeness, and point-wise information content of distilled data.
We reveal that distilled data cannot serve as a substitute for real data during training.
We provide a framework for interpreting distilled data and reveal that individual distilled data points contain meaningful semantic information.
arXiv Detail & Related papers (2024-06-06T17:28:56Z)
- Importance-Aware Adaptive Dataset Distillation [53.79746115426363]
Development of deep learning models is enabled by the availability of large-scale datasets.
Dataset distillation aims to synthesize a compact dataset that retains the essential information from the large original dataset.
We propose an importance-aware adaptive dataset distillation (IADD) method that can improve distillation performance.
arXiv Detail & Related papers (2024-01-29T03:29:39Z)
- Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality [78.6359306550245]
We argue that using just one synthetic subset for distillation will not yield optimal generalization performance.
The proposed Progressive Dataset Distillation (PDD) synthesizes multiple small sets of synthetic images, each conditioned on the previous sets, and trains the model on the cumulative union of these subsets (a minimal sketch follows this entry).
Our experiments show that PDD can effectively improve the performance of existing dataset distillation methods by up to 4.3%.
arXiv Detail & Related papers (2023-10-10T20:04:44Z)
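The progressive procedure described in the entry above (multiple small synthetic subsets, each conditioned on the previous ones, with training on their cumulative union) can be captured in a short loop. The sketch below uses placeholder callables (`distill_subset`, `train_model`) and hypothetical arguments; it only illustrates the control flow, not the paper's algorithm.

```python
# Hypothetical sketch of a progressive distillation loop: each stage distills a small
# synthetic subset conditioned on everything distilled so far, and the model is trained
# on the cumulative union of those subsets. The two callables are placeholders.
from typing import Any, Callable, List


def progressive_distillation(real_dataset: Any,
                             distill_subset: Callable[..., List[Any]],
                             train_model: Callable[..., Any],
                             num_stages: int = 5,
                             images_per_stage: int = 10):
    cumulative: List[Any] = []        # union of all synthetic subsets so far
    model = None
    for _ in range(num_stages):
        # Distill a new subset, conditioned on the previously synthesized images.
        subset = distill_subset(real_dataset, condition_on=cumulative, size=images_per_stage)
        cumulative = cumulative + subset
        # Train (or continue training) on the cumulative union.
        model = train_model(cumulative, init=model)
    return cumulative, model
```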
- Distill Gold from Massive Ores: Bi-level Data Pruning towards Efficient Dataset Distillation [96.92250565207017]
We study the data efficiency and selection for the dataset distillation task.
By re-formulating the dynamics of distillation, we provide insight into the inherent redundancy in the real dataset.
We identify the most influential samples based on their causal effects on the distillation.
arXiv Detail & Related papers (2023-05-28T06:53:41Z)
- A Comprehensive Survey of Dataset Distillation [73.15482472726555]
It has become challenging to handle the unlimited growth of data with limited computing power.
Deep learning technology has developed at an unprecedented pace over the last decade.
This paper provides a holistic understanding of dataset distillation from multiple aspects.
arXiv Detail & Related papers (2023-01-13T15:11:38Z)
- Dataset Distillation by Matching Training Trajectories [75.9031209877651]
We propose a new formulation that optimizes our distilled data to guide networks to a similar state as those trained on real data (a minimal sketch of this idea follows this entry).
Given a network, we train it for several iterations on our distilled data and optimize the distilled data with respect to the distance between the synthetically trained parameters and the parameters trained on real data.
Our method handily outperforms existing methods and also allows us to distill higher-resolution visual data.
arXiv Detail & Related papers (2022-03-22T17:58:59Z)
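The trajectory-matching idea above can be made concrete with a short sketch. The code below is an illustrative assumption, not the authors' implementation: it unrolls a few SGD steps on the synthetic data and penalizes the distance between the resulting parameters and a checkpoint recorded from training on real data.

```python
# Illustrative sketch (not the authors' code) of matching training trajectories:
# unroll a few gradient steps on the synthetic data, then minimize the distance
# between the resulting parameters and a target checkpoint from real-data training.
import torch
import torch.nn.functional as F
from torch.func import functional_call  # PyTorch >= 2.0; buffers omitted for brevity


def trajectory_matching_loss(model, start_params, target_params,
                             syn_images, syn_labels, inner_steps=5, inner_lr=0.01):
    # Start from a checkpoint taken along the real-data training trajectory.
    params = {k: v.detach().clone().requires_grad_(True) for k, v in start_params.items()}
    for _ in range(inner_steps):
        logits = functional_call(model, params, (syn_images,))
        loss = F.cross_entropy(logits, syn_labels)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        # One differentiable SGD step on the synthetic data.
        params = {k: p - inner_lr * g for (k, p), g in zip(params.items(), grads)}
    # Distance between synthetically trained parameters and the real-data target,
    # normalized by how far the real trajectory moved.
    num = sum(((p - target_params[k]) ** 2).sum() for k, p in params.items())
    den = sum(((start_params[k] - target_params[k]) ** 2).sum() for k in params) + 1e-8
    return num / den
```

In this reading, the distilled images would be leaf tensors with requires_grad=True, and an outer optimizer would backpropagate this loss into them; checkpoint handling and buffers are omitted for brevity.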
This list is automatically generated from the titles and abstracts of the papers on this site.