User Driven Model Adjustment via Boolean Rule Explanations
- URL: http://arxiv.org/abs/2203.15071v1
- Date: Mon, 28 Mar 2022 20:27:02 GMT
- Title: User Driven Model Adjustment via Boolean Rule Explanations
- Authors: Elizabeth M. Daly, Massimiliano Mattetti, Öznur Alkan, Rahul Nair
- Abstract summary: We present a solution which leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries.
Our interactive overlay approach achieves this goal without requiring model retraining.
We demonstrate that user feedback rules can be layered with the ML predictions to provide immediate changes which in turn supports learning with less data.
- Score: 7.814304432499296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI solutions are heavily dependent on the quality and accuracy of the input
training data; however, the training data may not always fully reflect the most
up-to-date policy landscape or may be missing business logic. The advances in
explainability have opened the possibility of allowing users to interact with
interpretable explanations of ML predictions in order to inject modifications
or constraints that more accurately reflect current realities of the system. In
this paper, we present a solution which leverages the predictive power of ML
models while allowing the user to specify modifications to decision boundaries.
Our interactive overlay approach achieves this goal without requiring model
retraining, making it appropriate for systems that need to apply instant
changes to their decision making. We demonstrate that user feedback rules can
be layered with the ML predictions to provide immediate changes which in turn
supports learning with less data.
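To make the layering idea concrete, below is a minimal sketch, not the paper's implementation, of how Boolean feedback rules can be checked first and override a trained model's prediction so that changes take effect without retraining. The class, feature names, and rule-priority handling are illustrative assumptions.

```python
# Minimal sketch of a rule overlay (illustrative only, not the paper's code):
# user feedback rules are Boolean conditions over feature values; when a rule
# fires, its label overrides the model's prediction, with no retraining needed.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

import numpy as np


@dataclass
class FeedbackRule:
    condition: Callable[[Dict[str, Any]], bool]  # Boolean rule over a feature dict
    label: Any                                   # label to assign when the rule fires


class RuleOverlay:
    def __init__(self, model, rules: List[FeedbackRule], feature_order: List[str]):
        self.model = model                # any fitted estimator exposing predict()
        self.rules = rules                # user rules, checked in priority order
        self.feature_order = feature_order

    def predict(self, instances: List[Dict[str, Any]]) -> List[Any]:
        X = np.array([[inst[f] for f in self.feature_order] for inst in instances])
        base = self.model.predict(X)      # underlying ML predictions
        results = []
        for inst, pred in zip(instances, base):
            fired = next((r for r in self.rules if r.condition(inst)), None)
            results.append(fired.label if fired is not None else pred)
        return results


# Hypothetical usage: reject low-income applicants regardless of the model's score.
# overlay = RuleOverlay(model, [FeedbackRule(lambda x: x["income"] < 20000, "reject")],
#                       feature_order=["income", "age", "tenure"])
# overlay.predict([{"income": 15000, "age": 31, "tenure": 2}])
```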
Related papers
- Self-Healing Machine Learning: A Framework for Autonomous Adaptation in Real-World Environments [50.310636905746975]
Real-world machine learning systems often encounter model performance degradation due to distributional shifts in the underlying data generating process.
Existing approaches to addressing shifts, such as concept drift adaptation, are limited by their reason-agnostic nature.
We propose self-healing machine learning (SHML) to overcome these limitations.
arXiv Detail & Related papers (2024-10-31T20:05:51Z)
- Accelerating Large Language Model Inference with Self-Supervised Early Exits [0.0]
This paper presents a novel technique for accelerating inference in large, pre-trained language models (LLMs).
We propose the integration of early exit "heads" atop existing transformer layers, which facilitate conditional terminations based on a confidence metric.
arXiv Detail & Related papers (2024-07-30T07:58:28Z)
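As an aside on the early-exit mechanism mentioned above, here is a rough PyTorch-style sketch of the general idea: confidence-gated exit heads attached to intermediate layers. The module, threshold, and layer interface are assumptions for illustration, not the paper's architecture.

```python
# Rough sketch of confidence-based early exits (illustrative; not the paper's code).
import torch
import torch.nn as nn


class EarlyExitStack(nn.Module):
    def __init__(self, layers: nn.ModuleList, hidden_dim: int, num_classes: int,
                 threshold: float = 0.9):
        super().__init__()
        self.layers = layers
        # one lightweight exit "head" per layer, trained to mimic the final classifier
        self.exit_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_classes) for _ in layers]
        )
        self.threshold = threshold  # confidence required to terminate early

    @torch.no_grad()
    def forward(self, h: torch.Tensor):
        """Single-example inference: h has shape (1, hidden_dim)."""
        for layer, head in zip(self.layers, self.exit_heads):
            h = layer(h)
            probs = head(h).softmax(dim=-1)
            confidence, prediction = probs.max(dim=-1)
            if confidence.item() >= self.threshold:  # confident enough: stop here
                return prediction, probs
        return prediction, probs  # fell through to the last layer
```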
- Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z)
- Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z)
- Introducing User Feedback-based Counterfactual Explanations (UFCE) [49.1574468325115]
Counterfactual explanations (CEs) have emerged as a viable solution for generating comprehensible explanations in XAI.
UFCE allows for the inclusion of user constraints to determine the smallest modifications in the subset of actionable features.
UFCE outperforms two well-known CE methods in terms of proximity, sparsity, and feasibility.
arXiv Detail & Related papers (2024-02-26T20:09:44Z)
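To illustrate the constraint idea in the UFCE summary above, a counterfactual search can be restricted to user-specified actionable features and ranges. This is a generic sketch, not the UFCE algorithm; the helper name, the per-feature search, and the proximity measure are assumptions.

```python
# Generic sketch of a user-constrained counterfactual search (not the UFCE method).
import numpy as np


def user_constrained_counterfactual(model, x, actionable, desired_class, steps=25):
    """actionable: {feature_index: (low, high)} ranges supplied by the user."""
    best, best_cost = None, np.inf
    for idx, (low, high) in actionable.items():
        for value in np.linspace(low, high, steps):   # probe one feature at a time
            candidate = x.copy()
            candidate[idx] = value
            if model.predict(candidate.reshape(1, -1))[0] == desired_class:
                cost = abs(value - x[idx])            # proximity: magnitude of the edit
                if cost < best_cost:
                    best, best_cost = candidate, cost
    return best  # None if no single-feature edit inside the constraints flips the label
```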
- Active Inference on the Edge: A Design Study [5.815300670677979]
Active Inference (ACI) is a concept from neuroscience that describes how the brain constantly predicts and evaluates sensory information to decrease long-term surprise.
We show how our ACI agent was able to quickly and traceably solve an optimization problem while fulfilling requirements.
arXiv Detail & Related papers (2023-11-17T16:03:04Z)
- FROTE: Feedback Rule-Driven Oversampling for Editing Models [14.112993602274457]
We focus on user-provided feedback rules as a way to expedite the ML model update process.
We introduce the problem of pre-processing training data to edit an ML model in response to feedback rules.
To solve this problem, we propose a novel data augmentation method, the Feedback Rule-Based Oversampling Technique.
arXiv Detail & Related papers (2022-01-04T10:16:13Z)
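The FROTE summary above describes editing a model by pre-processing its training data. The sketch below shows one simplified way feedback-rule-driven oversampling could look; the sampling scheme, column handling, and function name are assumptions, not the published FROTE procedure.

```python
# Simplified illustration of feedback-rule-driven oversampling (not the FROTE algorithm).
import numpy as np
import pandas as pd


def oversample_for_rule(df: pd.DataFrame, rule, rule_label, n_samples=200, seed=0):
    """rule: row -> bool Boolean feedback rule; rule_label: label the rule dictates."""
    rng = np.random.default_rng(seed)
    covered = df[df.apply(rule, axis=1)]              # existing rows the rule covers
    if covered.empty:
        return df
    synthetic = covered.sample(n_samples, replace=True, random_state=seed).copy()
    for col in synthetic.select_dtypes(include="number").columns:
        spread = df[col].std()
        noise_scale = 0.01 * (spread if spread and spread > 0 else 1.0)
        synthetic[col] += rng.normal(0.0, noise_scale, len(synthetic))  # small jitter
    synthetic = synthetic[synthetic.apply(rule, axis=1)]  # keep rows still covered by the rule
    synthetic["label"] = rule_label                       # label imposed by the feedback rule
    # retraining on the augmented frame yields a model edited toward the rule
    return pd.concat([df, synthetic], ignore_index=True)
```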
- Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning [0.0]
We consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at dealing with learning problems where the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may potentially lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z)
- Model-agnostic and Scalable Counterfactual Explanations via Reinforcement Learning [0.5729426778193398]
We propose a deep reinforcement learning approach that transforms the optimization procedure into an end-to-end learnable process.
Our experiments on real-world data show that our method is model-agnostic, relying only on feedback from model predictions.
arXiv Detail & Related papers (2021-06-04T16:54:36Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
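As a side note on the mechanism named in the BAR summary above: zeroth-order optimization estimates gradients purely from input-output queries. The sketch below shows a standard one-sided random-direction estimator; function and parameter names are illustrative, and this is not the BAR implementation.

```python
# Standard zeroth-order (query-only) gradient estimate, as used for black-box models.
import numpy as np


def zeroth_order_grad(loss_fn, theta, num_queries=20, mu=1e-3, seed=0):
    """Approximate the gradient of loss_fn at theta from function evaluations only."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    base = loss_fn(theta)
    for _ in range(num_queries):
        u = rng.standard_normal(theta.shape)               # random probe direction
        grad += (loss_fn(theta + mu * u) - base) / mu * u  # directional difference
    return grad / num_queries


# A reprogramming perturbation could then be updated with plain gradient descent:
# theta -= learning_rate * zeroth_order_grad(query_model_loss, theta)
```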
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
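For reference, the conditional mutual information mentioned in the summary above has the standard form below; the symbols are generic and the conditioning on user knowledge is inferred from the summary, so the paper's exact notation may differ.

$$ I(E; \hat{Y} \mid U) = \mathbb{E}\left[ \log \frac{p(E, \hat{Y} \mid U)}{p(E \mid U)\, p(\hat{Y} \mid U)} \right] $$

Here $E$ is the explanation, $\hat{Y}$ the model prediction, and $U$ the user's background knowledge: an explanation is scored higher the more it reduces uncertainty about the prediction given what the user already knows.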