A Generic Approach for Reproducible Model Distillation
- URL: http://arxiv.org/abs/2211.12631v3
- Date: Thu, 27 Apr 2023 21:04:30 GMT
- Title: A Generic Approach for Reproducible Model Distillation
- Authors: Yunzhe Zhou, Peiru Xu, Giles Hooker
- Abstract summary: We develop a generic approach for stable model distillation based on the central limit theorem for the average loss.
We demonstrate the application of our proposed approach on three commonly used intelligible models: decision trees, falling rule lists and symbolic regression.
- Score: 2.457924087844968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model distillation has been a popular method for producing interpretable
machine learning. It uses an interpretable "student" model to mimic the
predictions made by the black box "teacher" model. However, when the student
model is sensitive to the variability of the data sets used for training even
when keeping the teacher fixed, the corresponding interpretation is not
reliable. Existing strategies stabilize model distillation by checking whether
a large enough corpus of pseudo-data is generated to reliably reproduce student
models, but methods to do so have so far been developed for a specific student
model. In this paper, we develop a generic approach for stable model
distillation based on the central limit theorem for the average loss. We start with
a collection of candidate student models and search for candidates that
reasonably agree with the teacher. Then we construct a multiple testing
framework to select a corpus size such that the same student model would be
selected consistently across different pseudo samples. We demonstrate the application of
our proposed approach on three commonly used intelligible models: decision
trees, falling rule lists and symbolic regression. Finally, we conduct
simulation experiments on Mammographic Mass and Breast Cancer datasets and
illustrate the testing procedure through a theoretical analysis with a Markov
process. The code is publicly available at
https://github.com/yunzhe-zhou/GenericDistillation.
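The core recipe in the abstract, fitting candidate students on teacher-labeled pseudo-data and using a CLT-based confidence interval on the average loss to judge when the corpus is large enough to select a consistent student, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the teacher, the two candidate students, and the pseudo-data generator are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box "teacher", kept fixed throughout.
def teacher(x):
    return np.sin(3 * x) + 0.5 * x

# Hypothetical candidate "student" models of different complexity;
# each fits itself to the teacher's labels c at the points x.
candidates = {
    "linear": lambda x, c: np.polyval(np.polyfit(x, c, 1), x),
    "cubic": lambda x, c: np.polyval(np.polyfit(x, c, 3), x),
}

def average_loss_ci(n, student, z=1.96):
    """CLT-based 95% confidence interval for a student's average
    squared loss on n teacher-labeled pseudo-data points."""
    x = rng.uniform(-1.0, 1.0, n)      # pseudo-data from a chosen generator
    y = teacher(x)                     # teacher labels
    losses = (student(x, y) - y) ** 2  # per-point squared loss
    mean = losses.mean()
    se = losses.std(ddof=1) / np.sqrt(n)
    return mean - z * se, mean + z * se

# Grow the pseudo-data corpus until the candidates' loss intervals
# separate, so the same student would be selected on a fresh sample.
for n in (100, 1000, 10000):
    print(n, {name: average_loss_ci(n, s) for name, s in candidates.items()})
```

In the paper this pairwise comparison becomes a multiple testing procedure over the whole candidate set, with the corpus size chosen so that the selected student is stable across different pseudo samples.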
Related papers
- Dual Student Networks for Data-Free Model Stealing [79.67498803845059]
Two main challenges are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples.
We propose a Dual Student method where two students are symmetrically trained in order to provide the generator a criterion to generate samples that the two students disagree on.
We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets.
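The disagreement criterion in this summary can be conveyed with a toy stand-in: rather than training a generator by gradient as the paper does, score random candidate inputs by how much two hypothetical students disagree and keep the top scorers as the next queries to the target model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical students with different current fits of the target.
def student_a(x):
    return 2.0 * x

def student_b(x):
    return 2.0 * x + np.sin(5 * x)

# Generator stand-in: propose random inputs, then keep the ones the two
# students disagree on most; these are the most informative points to
# query the black-box target model on next.
cands = rng.uniform(-1.0, 1.0, size=1000)
disagreement = (student_a(cands) - student_b(cands)) ** 2
queries = cands[np.argsort(disagreement)[-10:]]  # top-10 disagreement points
```

This selection-by-ranking version only illustrates the criterion; the actual method backpropagates the disagreement signal through both students into the generator.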
arXiv Detail & Related papers (2023-09-18T18:11:31Z)
- Improving Heterogeneous Model Reuse by Density Estimation [105.97036205113258]
This paper studies multiparty learning, aiming to learn a model using the private data of different participants.
Model reuse is a promising solution for multiparty learning, assuming that a local model has been trained for each party.
arXiv Detail & Related papers (2023-05-23T09:46:54Z)
- Model-based micro-data reinforcement learning: what are the crucial model properties and which model to choose? [0.2836066255205732]
We contribute to micro-data model-based reinforcement learning (MBRL) by rigorously comparing popular generative models.
We find that on an environment that requires multimodal posterior predictives, mixture density nets outperform all other models by a large margin.
We also find that deterministic models are on par with, and in fact consistently (though not significantly) outperform, their probabilistic counterparts.
arXiv Detail & Related papers (2021-07-24T11:38:25Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
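A generic piecewise-linear fit (a stand-in, not the paper's algorithm, whose details are not given in this summary) can be sketched with a hat-function basis and least squares. The distilled artifact is then just a small table of breakpoints and values plus linear interpolation, which is easy to render as short, human-readable code.

```python
import numpy as np

# Curve to distill (a hypothetical model output on a 1-D feature).
x = np.linspace(0.0, 1.0, 200)
y = x ** 2

knots = np.linspace(0.0, 1.0, 6)  # fixed breakpoints (an assumption here)

# Hat-function design matrix: column i is the piecewise-linear basis
# function that equals 1 at knot i and 0 at every other knot.
basis = np.stack(
    [np.interp(x, knots, np.eye(len(knots))[i]) for i in range(len(knots))],
    axis=1,
)
values, *_ = np.linalg.lstsq(basis, y, rcond=None)

# The "human-readable code" is just: np.interp(x_new, knots, values).
fit = basis @ values
print(np.round(values, 3))
```

Choosing knot locations (rather than fixing them up front) is where a production-quality algorithm would spend most of its effort.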
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- Symbolic Regression Driven by Training Data and Prior Knowledge [0.0]
In symbolic regression, the search for analytic models is driven purely by the prediction error observed on the training data samples.
We propose a multi-objective symbolic regression approach that is driven by both the training data and the prior knowledge of the properties the desired model should manifest.
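The combined objective can be sketched as a data-fit term plus a penalty for violating prior knowledge. Here the prior (that the desired model is increasing in x) and the two candidate expressions are hypothetical stand-ins for what a symbolic-regression search might propose.

```python
import numpy as np

rng = np.random.default_rng(2)

x = np.linspace(0.1, 2.0, 50)
y = np.sqrt(x) + rng.normal(0.0, 0.05, x.size)  # noisy training data

# Hypothetical candidate expressions from a symbolic-regression search.
candidates = {
    "sqrt(x)": np.sqrt(x),
    "1 - x + x**2": 1.0 - x + x ** 2,
}

def score(pred, weight=1.0):
    """Multi-objective score: mean squared error on the data plus a
    penalty for decreasing steps, encoding the prior knowledge that
    the desired model should be increasing in x."""
    mse = np.mean((pred - y) ** 2)
    violation = np.maximum(0.0, -np.diff(pred)).sum()  # decreasing steps
    return mse + weight * violation

scores = {name: score(pred) for name, pred in candidates.items()}
print(scores)
```

A true multi-objective search would keep a Pareto front over the two terms rather than a single weighted sum; the weighted score above is the simplest scalarization.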
arXiv Detail & Related papers (2020-04-24T19:15:06Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.