An interpretable neural network model through piecewise linear
approximation
- URL: http://arxiv.org/abs/2001.07119v1
- Date: Mon, 20 Jan 2020 14:32:11 GMT
- Title: An interpretable neural network model through piecewise linear
approximation
- Authors: Mengzhuo Guo, Qingpeng Zhang, Xiuwu Liao, Daniel Dajun Zeng
- Abstract summary: We propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component.
The first component describes the explicit feature contributions by piecewise linear approximation to increase the expressiveness of the model.
The other component uses a multi-layer perceptron to capture feature interactions and implicit nonlinearity, and increase the prediction performance.
- Score: 7.196650216279683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing interpretable methods explain a black-box model in a post-hoc
manner, using simpler models or data analysis techniques to interpret the
predictions after the model is learned. However, they (a) may derive
contradictory explanations for the same predictions given different methods
and data samples, and (b) focus on using simpler models to provide higher
descriptive accuracy at the expense of prediction accuracy. To address these
issues, we propose a hybrid interpretable model that combines a piecewise
linear component and a nonlinear component. The first component describes the
explicit feature contributions by piecewise linear approximation to increase
the expressiveness of the model. The other component uses a multi-layer
perceptron to capture feature interactions and implicit nonlinearity, and
increase the prediction performance. Unlike post-hoc approaches, the model's
interpretability, in the form of feature shapes, is obtained as soon as the
model is learned. We also provide a variant to explore higher-order interactions
among features to demonstrate that the proposed model is flexible for
adaptation. Experiments demonstrate that the proposed model can achieve good
interpretability by describing feature shapes while maintaining
state-of-the-art accuracy.
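As a rough illustration of the architecture sketched in the abstract, the following PyTorch snippet combines per-feature piecewise linear shape functions with a small multi-layer perceptron and sums their outputs. It is a minimal sketch, not the authors' implementation: the ReLU-knot parameterization, the fixed knots in [0, 1], and all layer sizes are assumptions made for this example.

# Minimal, illustrative sketch of the hybrid architecture described above
# (piecewise linear shape functions plus an MLP for interactions). This is
# not the authors' code; knot placement and layer sizes are assumptions.
import torch
import torch.nn as nn

class PiecewiseLinearShapes(nn.Module):
    """One piecewise linear shape function per input feature:
    f_j(x_j) = b_j + sum_k a_{jk} * relu(x_j - t_{jk}),
    i.e. a piecewise linear curve with breakpoints at fixed knots t_{jk}."""
    def __init__(self, n_features: int, n_knots: int = 8):
        super().__init__()
        # Evenly spaced knots in [0, 1]; assumes features are scaled to [0, 1].
        self.register_buffer("knots", torch.linspace(0.0, 1.0, n_knots).repeat(n_features, 1))
        self.slopes = nn.Parameter(torch.zeros(n_features, n_knots))
        self.bias = nn.Parameter(torch.zeros(n_features))

    def forward(self, x):                                    # x: (batch, F)
        basis = torch.relu(x.unsqueeze(-1) - self.knots)     # (batch, F, K)
        return self.bias + (basis * self.slopes).sum(dim=-1) # (batch, F)

class HybridInterpretableNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.shapes = PiecewiseLinearShapes(n_features)      # interpretable component
        self.mlp = nn.Sequential(                            # interactions / implicit nonlinearity
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        explicit = self.shapes(x).sum(dim=-1, keepdim=True)  # sum of feature contributions
        implicit = self.mlp(x)
        return explicit + implicit

model = HybridInterpretableNet(n_features=10)
y_hat = model(torch.rand(4, 10))                             # shape: (4, 1)

Under these assumptions, the learned feature shapes can be read off after training by evaluating PiecewiseLinearShapes on a grid of values for each feature, which is what gives the model its built-in interpretability.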
Related papers
- Model orthogonalization and Bayesian forecast mixing via Principal Component Analysis [0.0]
In many cases, the models used in the mixing process are similar.
The existence of such similar, or even redundant, models during the multimodeling process can result in misinterpretation of results and deterioration of predictive performance.
We show that by adding model orthogonalization to the proposed Bayesian Model Combination framework, one can achieve better prediction accuracy and excellent uncertainty quantification performance.
arXiv Detail & Related papers (2024-05-17T15:01:29Z) - On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize the concept dynamics of the next token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z) - Bayesian Neural Network Inference via Implicit Models and the Posterior
Predictive Distribution [0.8122270502556371]
We propose a novel approach to perform approximate Bayesian inference in complex models such as Bayesian neural networks.
The approach is more scalable to large data than Markov Chain Monte Carlo.
We see this being useful in applications such as surrogate and physics-based models.
arXiv Detail & Related papers (2022-09-06T02:43:19Z) - Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that, with only a few examples, pre-trained language models exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z) - Model-agnostic multi-objective approach for the evolutionary discovery
of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z) - On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Both tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z) - Partially Interpretable Estimators (PIE): Black-Box-Refined
Interpretable Machine Learning [5.479705009242287]
We propose Partially Interpretable Estimators (PIE) which attribute a prediction to individual features via an interpretable model.
We design an iterative training algorithm to jointly train the two types of models.
Experimental results show that PIE is highly competitive to black-box models while outperforming interpretable baselines.
arXiv Detail & Related papers (2021-05-06T03:06:34Z) - Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z) - Explaining and Improving Model Behavior with k Nearest Neighbor
Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z) - Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z) - Pair the Dots: Jointly Examining Training History and Test Stimuli for
Model Interpretability [44.60486560836836]
Any prediction from a model is made by a combination of learning history and test stimuli.
Existing methods to interpret a model's predictions are only able to capture a single aspect of either test stimuli or learning history.
We propose an efficient and differentiable approach to make it feasible to interpret a model's prediction by jointly examining training history and test stimuli.
arXiv Detail & Related papers (2020-10-14T10:45:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.