Actionable Interpretation of Machine Learning Models for Sequential
Data: Dementia-related Agitation Use Case
- URL: http://arxiv.org/abs/2009.05097v1
- Date: Thu, 10 Sep 2020 19:04:12 GMT
- Title: Actionable Interpretation of Machine Learning Models for Sequential
Data: Dementia-related Agitation Use Case
- Authors: Nutta Homdee, John Lach
- Abstract summary: Actionable interpretation can be implemented in most traditional black-box machine learning models.
It uses the already-trained model, its training data, and data processing techniques to extract actionable items.
It is shown that actionable items can be extracted, such as a decrease in the in-home light level that triggers an agitation episode.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning has shown success on complex learning problems in which
data/parameters can be multidimensional and too complex for a first-principles-based
analysis. Some applications that utilize machine learning require human
interpretability, not just to understand a particular result (classification,
detection, etc.) but also for humans to take action based on that result.
Black-box machine learning model interpretation has been studied, but recent
work has focused on validation and on improving model performance. In this work,
an actionable interpretation of black-box machine learning models is presented.
The proposed technique focuses on the extraction of actionable measures that help
users make a decision or take an action. Actionable interpretation can be
implemented in most traditional black-box machine learning models. It uses the
already-trained model, its training data, and data processing techniques to
extract actionable items from the model outcome and its time-series inputs. An
implementation of actionable interpretation is shown with a use case:
dementia-related agitation prediction from the ambient environment. It is shown
that actionable items can be extracted, such as a decrease in the in-home light
level that triggers an agitation episode. This use case of actionable
interpretation can help dementia caregivers take action to intervene and
prevent agitation.
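The abstract describes the extraction mechanism only at a high level. Below is a minimal sketch of the idea, assuming a trained classifier with a `predict_proba`-style interface over fixed-length multichannel ambient windows; the channel names, shift sizes, and model interface are illustrative assumptions, not the authors' exact algorithm.

```python
# Hedged sketch: rank single-channel shifts of an ambient window by how much
# they lower the predicted agitation probability of a trained black-box model.
import numpy as np

CHANNELS = ["light", "noise", "temperature"]  # hypothetical ambient channels

def actionable_items(model, window, deltas=(-0.2, 0.2)):
    """`window` is a (T, C) normalized time-series segment; `model.predict_proba`
    is assumed to map a (1, T, C) batch to a (1,) agitation probability."""
    base = model.predict_proba(window[None])[0]
    items = []
    for c, name in enumerate(CHANNELS):
        for d in deltas:
            shifted = window.copy()
            shifted[:, c] += d                    # shift one ambient channel
            p = model.predict_proba(shifted[None])[0]
            if p < base:                          # this action reduces predicted risk
                items.append((name, d, float(base - p)))
    return sorted(items, key=lambda t: -t[2])     # biggest risk reduction first
```

Under these assumptions, if `("light", +0.2, ...)` ranks first for a window predicted as agitation, raising the in-home light level is the suggested intervention, mirroring the paper's example of a decreasing light level triggering an episode.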
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the influence of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such challenging settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Common Steps in Machine Learning Might Hinder The Explainability Aims in Medicine [0.0]
This paper discusses the data preprocessing steps in machine learning and their impact on the explainability and interpretability of the model.
It is found that these steps improve the accuracy of the model but might hinder its explainability if they are not carefully considered, especially in medicine.
arXiv Detail & Related papers (2024-08-30T12:09:14Z)
- Achieving interpretable machine learning by functional decomposition of black-box models into explainable predictor effects [4.3500439062103435]
We propose a novel approach for the functional decomposition of black-box predictions.
Similar to additive regression models, our method provides insights into the direction and strength of the main feature contributions (a classical approximation is sketched below).
arXiv Detail & Related papers (2024-07-26T10:37:29Z)
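The entry above names additive-style main effects but not an estimator. Here is a minimal sketch of one classical approximation, a centered partial-dependence curve, assuming a generic `predict` function and tabular data; this is an illustration, not the paper's actual decomposition.

```python
# Hedged sketch: approximate the additive "main effect" of one feature of a
# black-box model as a centered partial-dependence curve over the data.
import numpy as np

def main_effect(predict, X, j, grid):
    """Centered partial dependence of feature j; `predict` maps (n, d) -> (n,)."""
    curve = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v                 # set feature j to the grid value for all rows
        curve.append(predict(Xv).mean())
    curve = np.asarray(curve)
    return curve - curve.mean()      # center so effects are comparable across features
```

The slope of the returned curve indicates the direction of the feature's main contribution, and its range indicates the strength.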
- Machine Unlearning for Causal Inference [0.6621714555125157]
It is important to enable a model to forget some of the information it has captured about a given user (machine unlearning).
This paper introduces the concept of machine unlearning for causal inference, particularly propensity score matching and treatment effect estimation (see the sketch below).
The dataset used in the study is the Lalonde dataset, a widely used dataset for evaluating the effectiveness of job training programs.
arXiv Detail & Related papers (2023-08-24T17:27:01Z)
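The propensity score matching that the paper above builds on is standard. A minimal sketch on Lalonde-style data (covariates `X`, binary `treated`, outcome `y`) follows; the `att_by_psm` helper and its one-nearest-neighbor matching are illustrative assumptions, not the paper's unlearning procedure.

```python
# Hedged sketch: match treated units to the nearest control by estimated
# propensity score and average the outcome differences (an ATT estimate).
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_by_psm(X, treated, y):
    """X: (n, d) covariates; treated: (n,) 0/1; y: (n,) outcomes."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t, c = np.where(treated == 1)[0], np.where(treated == 0)[0]
    diffs = []
    for i in t:
        j = c[np.argmin(np.abs(ps[c] - ps[i]))]  # nearest control by score
        diffs.append(y[i] - y[j])
    return float(np.mean(diffs))
```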
- Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes (one plausible instantiation is sketched below).
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods on similar problems.
arXiv Detail & Related papers (2023-04-03T19:32:30Z)
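The summary above does not specify how the learned metric enters the matching. One plausible instantiation is sketched below under stated assumptions: covariate weights are taken from a fitted model's feature importances, and a treated unit's counterfactual is estimated from its nearest controls under the weighted distance.

```python
# Hedged sketch: nearest-neighbor matching under a learned, feature-weighted
# metric, then a matched estimate of a treated unit's control outcome.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def matched_estimate(X, y, treated, i, k=5):
    """Estimate the control outcome for treated unit i from its k nearest controls."""
    w = RandomForestRegressor(n_estimators=100).fit(X, y).feature_importances_
    c = np.where(treated == 0)[0]
    dist = np.sqrt((((X[c] - X[i]) ** 2) * w).sum(axis=1))  # learned-weight metric
    nearest = c[np.argsort(dist)[:k]]
    return float(y[nearest].mean())
```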
- A Weighted Solution to SVM Actionability and Interpretability [0.0]
Actionability is as important as interpretability or explainability of machine learning models, and it remains an ongoing and important research topic.
This paper presents a solution to the question of actionability for both linear and non-linear SVM models (the linear case is sketched below).
arXiv Detail & Related papers (2020-12-06T20:35:25Z)
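For the linear case referenced above, the geometry is standard: the smallest action that flips a linear SVM's decision moves the point along the weight vector, just across the separating hyperplane. A minimal sketch follows; the paper's weighting scheme and non-linear treatment are not reproduced, and `minimal_action` is an illustrative name.

```python
# Hedged sketch: smallest perturbation of x that crosses the linear SVM
# decision boundary w.x + b = 0.
import numpy as np

def minimal_action(w, b, x, margin=1e-3):
    f = np.dot(w, x) + b
    step = -(f / np.dot(w, w)) * w       # projection of x onto the boundary
    return x + step * (1 + margin)       # nudge slightly past the boundary
```

The returned point differs from `x` only along `w`, the direction in which the decision value changes fastest.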
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
In this article, a new kind of interpretable machine learning method is presented.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts (see the sketch below).
Real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
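A minimal sketch of the quantile-shift probing described above, under assumed details: one feature of a real data point is moved up or down by a small quantile step of its empirical distribution, and the predicted class is re-read.

```python
# Hedged sketch: probe a classifier's class neighborhood around a real point
# by shifting one feature a small quantile step and checking for class flips.
import numpy as np

def quantile_shift_probe(predict, X, x, j, step=0.05):
    """`predict` maps (n, d) batches to (n,) class labels (assumed interface)."""
    cdf = np.mean(X[:, j] <= x[j])                       # empirical quantile of x_j
    lo, hi = np.quantile(X[:, j], np.clip([cdf - step, cdf + step], 0, 1))
    base = predict(x[None])[0]
    flips = {}
    for label, v in (("down", lo), ("up", hi)):
        x2 = x.copy()
        x2[j] = v
        flips[label] = bool(predict(x2[None])[0] != base)  # did the class change?
    return flips
```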
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth-order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses (the zeroth-order ingredient is sketched below).
BAR outperforms state-of-the-art methods and yields performance comparable to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
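Of BAR's two named ingredients, the zeroth-order piece can be sketched from the description alone: estimate the gradient of a query-only loss with random-direction finite differences, then update the input "program" without any model internals. The estimator form and `black_box_loss` below are assumptions, and the multi-label mapping is omitted.

```python
# Hedged sketch: query-only gradient estimate via random-direction finite
# differences, the zeroth-order ingredient of black-box reprogramming.
import numpy as np

def zo_gradient(loss, theta, q=20, mu=1e-2):
    """Average of q random-direction finite-difference estimates of grad loss(theta)."""
    g = np.zeros_like(theta)
    f0 = loss(theta)                                   # one shared baseline query
    for _ in range(q):
        u = np.random.randn(*theta.shape)
        g += (loss(theta + mu * u) - f0) / mu * u      # directional slope * direction
    return g / q

# usage sketch: theta -= lr * zo_gradient(black_box_loss, theta)
```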
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)