Model-agnostic interpretation by visualization of feature perturbations
- URL: http://arxiv.org/abs/2101.10502v1
- Date: Tue, 26 Jan 2021 00:53:29 GMT
- Title: Model-agnostic interpretation by visualization of feature perturbations
- Authors: Wilson E. Marcílio-Jr, Danilo M. Eler, Fabrício Breve
- Abstract summary: We propose a model-agnostic interpretation approach that uses visualization of feature perturbations induced by the particle swarm optimization algorithm.
We validate our approach both qualitatively and quantitatively on publicly available datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretation of machine learning models has become one of the most
important research topics, driven by the need to maintain control over these
algorithms and to avoid bias. Since many machine learning algorithms are
published every day, there is a need for novel model-agnostic interpretation
approaches that can be applied to a wide variety of algorithms. One
particularly useful way to interpret machine learning models is to feed them
different input data and observe the changes in their predictions. Using such
an approach, practitioners can relate patterns in the data to a model's
decisions. In this work, we propose a model-agnostic interpretation approach
that uses visualization of feature perturbations induced by the particle swarm
optimization algorithm. We validate our approach both qualitatively and
quantitatively on publicly available datasets, showing that it enhances the
interpretation of different classifiers while yielding very stable results
compared with state-of-the-art algorithms.
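The abstract does not spell out how the perturbations are scored, so the following Python sketch only illustrates the general idea under assumptions of our own: a small particle swarm search looks for a perturbation of a single instance that pushes a classifier toward the opposite class, and the per-feature perturbation magnitudes are then visualized. The dataset, classifier, objective, and hyperparameters are all illustrative, not the authors' implementation.

```python
# Illustrative sketch only: a tiny particle swarm search for a feature
# perturbation that flips a classifier's prediction, followed by a bar
# plot of per-feature perturbation magnitudes.  The objective and all
# parameter choices are assumptions, not the paper's implementation.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

x0 = X[0]                              # instance to explain
target = 1 - model.predict([x0])[0]    # aim for the opposite class

def cost(deltas):
    """Cost of a swarm of perturbations (shape: n_particles x n_features).
    Low cost = a small perturbation that pushes the prediction to `target`."""
    probs = model.predict_proba(x0 + deltas)[:, target]
    sparsity = np.linalg.norm(deltas, axis=1) / np.linalg.norm(x0)
    return (1.0 - probs) + 0.1 * sparsity

# Minimal global-best PSO with standard inertia and attraction weights.
rng = np.random.default_rng(0)
n_particles, n_features, iters = 30, X.shape[1], 200
pos = rng.normal(scale=0.1 * np.abs(x0) + 1e-8, size=(n_particles, n_features))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), cost(pos)
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    c = cost(pos)
    improved = c < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

# Visualize which features the swarm had to perturb the most.
plt.bar(range(n_features), np.abs(gbest))
plt.xlabel("feature index"); plt.ylabel("|perturbation|")
plt.title("Feature perturbations found by PSO (illustrative)")
plt.show()
```

Under this reading, features that the swarm must perturb heavily before the prediction changes are the ones the classifier relies on most for this instance.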
Related papers
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is growing interest in employing interpretable algorithms to assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
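As a loose illustration of the distillation idea in the entry above (the knot placement, dataset, and model below are assumptions, not the paper's piecewise-linear algorithm), one can approximate a trained model's response along a single feature with a few linear segments and print them as short, readable code:

```python
# Illustrative sketch: approximate a model's response to one feature with a
# piecewise-linear curve and emit it as human-readable code.  The fitting
# scheme (fixed knots plus linear interpolation) is an assumption.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

j = 2                                       # feature to distill (illustrative)
grid = np.linspace(X[:, j].min(), X[:, j].max(), 200)
Xg = np.tile(X.mean(axis=0), (len(grid), 1))
Xg[:, j] = grid
response = model.predict(Xg)                # partial response along feature j

knots = np.linspace(grid[0], grid[-1], 6)
values = np.interp(knots, grid, response)   # piecewise-linear approximation

# Emit the curve as readable code: one linear segment per pair of knots.
print(f"def feature_{j}_effect(x):")
for a, b, va, vb in zip(knots[:-1], knots[1:], values[:-1], values[1:]):
    slope = (vb - va) / (b - a)
    print(f"    if x <= {b:.3f}: return {va:.3f} + {slope:.3f} * (x - {a:.3f})")
print(f"    return {values[-1]:.3f}")
```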
- Interactive slice visualization for exploring machine learning models [0.0]
We use interactive visualization of slices of predictor space to address the interpretability deficit.
In effect, we open up the black box of machine learning algorithms for the purpose of interrogating, explaining, validating, and comparing model fits.
arXiv Detail & Related papers (2021-01-18T10:47:53Z)
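The slice idea in the entry above can be sketched non-interactively (with an assumed dataset and model) by holding all but two predictors at representative values and plotting the model's predictions over a grid of the remaining two:

```python
# Rough, non-interactive sketch of a 2-D slice of predictor space:
# vary two features over a grid, hold the rest at their means, and
# plot the fitted model's predicted class.  All choices are illustrative.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
model = SVC(gamma="auto").fit(X, y)

i, j = 2, 3                                   # features to slice over
gi = np.linspace(X[:, i].min(), X[:, i].max(), 150)
gj = np.linspace(X[:, j].min(), X[:, j].max(), 150)
GI, GJ = np.meshgrid(gi, gj)

slice_points = np.tile(X.mean(axis=0), (GI.size, 1))   # fix other features
slice_points[:, i] = GI.ravel()
slice_points[:, j] = GJ.ravel()
preds = model.predict(slice_points).reshape(GI.shape)

plt.contourf(GI, GJ, preds, alpha=0.6)
plt.scatter(X[:, i], X[:, j], c=y, edgecolor="k", s=15)
plt.xlabel(f"feature {i}"); plt.ylabel(f"feature {j}")
plt.title("Predicted class over a 2-D slice (other features at their means)")
plt.show()
```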
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Model-Agnostic Explanations using Minimal Forcing Subsets [11.420687735660097]
We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision.
Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem.
Results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior.
arXiv Detail & Related papers (2020-11-01T22:45:16Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
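As a rough reading of the perturbation scheme in the entry above (the dataset, model, and shift size are assumptions), one can nudge each feature of a real data point up or down by a small quantile shift and record whether the predicted class changes:

```python
# Illustrative sketch: take a real data point, shift one feature up or down
# by a small quantile step, and record whether the predicted class changes.
# All concrete choices here are assumptions, not the paper's method.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def quantile_shift_effects(x, shift=0.05):
    """For each feature, shift x by +/- `shift` quantiles of that feature's
    empirical distribution and report the resulting predicted classes."""
    base = model.predict([x])[0]
    effects = {}
    for j in range(X.shape[1]):
        q = (X[:, j] < x[j]).mean()              # empirical quantile of x[j]
        for sign in (-1, +1):
            x_new = x.copy()
            x_new[j] = np.quantile(X[:, j], np.clip(q + sign * shift, 0, 1))
            effects[(j, sign)] = model.predict([x_new])[0]
    return base, effects

base, effects = quantile_shift_effects(X[60])
changed = {k: v for k, v in effects.items() if v != base}
print("original class:", base, "class changes:", changed)
```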
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.