Actionable Interpretation of Machine Learning Models for Sequential
Data: Dementia-related Agitation Use Case
- URL: http://arxiv.org/abs/2009.05097v1
- Date: Thu, 10 Sep 2020 19:04:12 GMT
- Title: Actionable Interpretation of Machine Learning Models for Sequential
Data: Dementia-related Agitation Use Case
- Authors: Nutta Homdee, John Lach
- Abstract summary: Actionable interpretation can be implemented in most traditional black-box machine learning models.
It uses the already-trained model, its training data, and data-processing techniques to extract actionable items.
It is shown that actionable items can be extracted, such as a decreasing in-home light level that triggers an agitation episode.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has shown successes for complex learning problems in which
data/parameters can be multidimensional and too complex for a first-principles
based analysis. Some applications that utilize machine learning require human
interpretability, not just to understand a particular result (classification,
detection, etc.) but also for humans to take action based on that result.
Black-box machine learning model interpretation has been studied, but recent
work has focused on validation and improving model performance. In this work,
an actionable interpretation of black-box machine learning models is presented.
The proposed technique focuses on the extraction of actionable measures to help
users make a decision or take an action. Actionable interpretation can be
implemented in most traditional black-box machine learning models. It uses the
already-trained model, its training data, and data-processing techniques to
extract actionable items from the model outcome and its time-series inputs. An
implementation of the actionable interpretation is shown with a use case:
dementia-related agitation prediction and the ambient environment. It is shown
that actionable items can be extracted, such as a decreasing in-home light
level that triggers an agitation episode. This use case of actionable
interpretation can help dementia caregivers take action to intervene and
prevent agitation.
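The extraction step described in the abstract can be illustrated with a minimal counterfactual probe: given an already-trained black-box model and baseline values derived from training data, restore each time-series channel to its baseline and report the channels whose restoration flips the predicted agitation episode. Everything below (the toy threshold classifier, channel names, baseline values, and the function `extract_actionable_items`) is a hypothetical sketch for illustration, not the paper's actual implementation.

```python
import numpy as np

# Toy stand-in for a trained black-box agitation classifier (hypothetical):
# it flags agitation when the mean light level over the window drops below 0.3.
def model_predict(window):
    """window: array of shape (timesteps, channels); channel 0 = light level."""
    return 1 if window[:, 0].mean() < 0.3 else 0  # 1 = agitation predicted

def extract_actionable_items(predict, window, baselines, channel_names):
    """For each input channel, restore it to a baseline value (e.g., a typical
    value seen in the training data) and check whether the prediction flips.
    Channels whose restoration prevents the predicted agitation are actionable."""
    actionable = []
    if predict(window) != 1:
        return actionable  # no agitation predicted, nothing to act on
    for ch, name in enumerate(channel_names):
        modified = window.copy()
        modified[:, ch] = baselines[ch]   # counterfactually restore the channel
        if predict(modified) == 0:        # prediction flipped: channel is actionable
            actionable.append(name)
    return actionable

# Example: a window in which the in-home light level has dropped.
window = np.column_stack([
    np.full(10, 0.1),   # light level, well below its 0.5 baseline
    np.full(10, 0.6),   # ambient noise, already near its baseline
])
items = extract_actionable_items(model_predict, window,
                                 baselines=[0.5, 0.6],
                                 channel_names=["light level", "noise level"])
print(items)  # -> ['light level']
```

Here only restoring the light channel flips the prediction, so "light level" is the actionable item a caregiver could act on, matching the use case described above.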
Related papers
- Machine Unlearning for Causal Inference [0.6621714555125157]
It is important to enable a model to forget some of the information it has learned/captured about a given user (machine unlearning).
This paper introduces the concept of machine unlearning for causal inference, particularly propensity score matching and treatment effect estimation.
The dataset used in the study is the Lalonde dataset, a widely used dataset for evaluating the effectiveness of job training programs.
arXiv Detail & Related papers (2023-08-24T17:27:01Z)
- Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes.
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods for similar problems.
arXiv Detail & Related papers (2023-04-03T19:32:30Z)
- An Interactive Visualization Tool for Understanding Active Learning [12.345164513513671]
We present an interactive visualization tool to elucidate the training process of active learning.
The tool enables one to select a sample of interesting data points, view how their prediction values change at different querying stages, and thus better understand when and how active learning works.
arXiv Detail & Related papers (2021-11-09T03:33:26Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- A Weighted Solution to SVM Actionability and Interpretability [0.0]
Actionability is as important as the interpretability or explainability of machine learning models, both ongoing and important research topics.
This paper finds a solution to the question of actionability on both linear and non-linear SVM models.
arXiv Detail & Related papers (2020-12-06T20:35:25Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used and the changes of the prediction after slightly raising or decreasing specific features are observed.
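The probing idea just described, slightly raising or lowering specific features of real data points and observing whether the prediction changes, can be sketched in a few lines. The toy classifier, the fixed shift step (the paper uses quantile-based shifts), and the function `probe_neighborhood` are all hypothetical illustrations, not the authors' code.

```python
import numpy as np

# Toy trained classifier (hypothetical): class 1 iff x0 + x1 > 1.
def predict(x):
    return int(x[0] + x[1] > 1.0)

def probe_neighborhood(predict, point, shifts):
    """Shift each feature of a real data point up and down by a small amount
    and record, per feature and direction, whether the predicted class changes.
    This approximates the partitioning of the feature space near the point."""
    base = predict(point)
    report = {}
    for i, step in enumerate(shifts):
        for direction, delta in (("up", step), ("down", -step)):
            shifted = point.copy()
            shifted[i] += delta
            report[(i, direction)] = predict(shifted) != base
    return report

point = np.array([0.55, 0.5])   # predicted class 1, since 0.55 + 0.5 > 1
report = probe_neighborhood(predict, point, shifts=[0.1, 0.1])
print(report[(0, "down")])  # True: lowering x0 crosses the class boundary
```

Features whose small shifts flip the class reveal how close the point sits to a decision boundary in each direction.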
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
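The zeroth-order optimization that BAR relies on can be illustrated with a two-point randomized gradient estimator, which needs only input-output queries to the black-box loss. This is a generic sketch of the estimator class, with a hypothetical quadratic loss standing in for the reprogramming objective; it is not BAR's actual training loop.

```python
import numpy as np

def zeroth_order_grad(loss, theta, num_dirs=20, mu=1e-3, rng=None):
    """Estimate the gradient of a black-box loss from function queries alone:
    average finite differences along random Gaussian directions."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(theta)
    for _ in range(num_dirs):
        u = rng.standard_normal(theta.shape)
        grad += (loss(theta + mu * u) - loss(theta)) / mu * u
    return grad / num_dirs

# Example: minimize ||theta - target||^2 using only loss queries,
# the way a black-box model can be adapted without gradient access.
target = np.array([1.0, -2.0])
loss = lambda t: float(np.sum((t - target) ** 2))
theta = np.zeros(2)
for _ in range(200):
    theta -= 0.05 * zeroth_order_grad(loss, theta, rng=0)
print(np.round(theta, 2))
```

After a few hundred query-only steps, `theta` approaches the target, showing how optimization proceeds with no access to the model's internals.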
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- A Semiparametric Approach to Interpretable Machine Learning [9.87381939016363]
Black box models in machine learning have demonstrated excellent predictive performance in complex problems and high-dimensional settings.
Their lack of transparency and interpretability restrict the applicability of such models in critical decision-making processes.
We propose a novel approach to trading off interpretability and performance in prediction models using ideas from semiparametric statistics.
arXiv Detail & Related papers (2020-06-08T16:38:15Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.