FROTE: Feedback Rule-Driven Oversampling for Editing Models
- URL: http://arxiv.org/abs/2201.01070v1
- Date: Tue, 4 Jan 2022 10:16:13 GMT
- Title: FROTE: Feedback Rule-Driven Oversampling for Editing Models
- Authors: \"Oznur Alkan, Dennis Wei, Massimiliano Matteti, Rahul Nair, Elizabeth
M. Daly, Diptikalyan Saha
- Abstract summary: We focus on user-provided feedback rules as a way to expedite the ML model update process.
We introduce the problem of pre-processing training data to edit an ML model in response to feedback rules.
To solve this problem, we propose a novel data augmentation method, the Feedback Rule-Based Oversampling Technique.
- Score: 14.112993602274457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning models may involve decision boundaries that change over time
due to updates to rules and regulations, such as in loan approvals or claims
management. However, in such scenarios, it may take time for sufficient
training data to accumulate in order to retrain the model to reflect the new
decision boundaries. While work has been done to reinforce existing decision
boundaries, very little has been done to address scenarios where the decision
boundaries of ML models should change in order to reflect new rules. In this
paper, we focus on user-provided feedback rules as a way to expedite the ML
model update process, and we formally introduce the problem of
pre-processing training data to edit an ML model in response to feedback rules
such that once the model is retrained on the pre-processed data, its decision
boundaries align more closely with the rules. To solve this problem, we propose
a novel data augmentation method, the Feedback Rule-Based Oversampling
Technique. Extensive experiments using different ML models and real world
datasets demonstrate the effectiveness of the method, in particular the benefit
of augmentation and the ability to handle many feedback rules.
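A rough sketch of the rule-driven oversampling idea is given below. This is not the paper's FROTE algorithm: the function name, the SMOTE-like interpolation heuristic, and the toy rule are assumptions made only to illustrate how a feedback rule could drive data augmentation before retraining.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def oversample_for_rule(X, y, rule_mask, rule_label, n_new, rng=None):
    """Hypothetical helper: synthesize points inside a feedback-rule region
    by interpolating between existing points that satisfy the rule
    (a SMOTE-like heuristic) and label them according to the rule."""
    rng = rng or np.random.default_rng(0)
    region = X[rule_mask]
    if len(region) < 2:
        raise ValueError("need at least two seed points inside the rule region")
    i = rng.integers(0, len(region), size=n_new)
    j = rng.integers(0, len(region), size=n_new)
    lam = rng.random((n_new, 1))
    X_new = region[i] + lam * (region[j] - region[i])  # stays inside a convex rule region
    y_new = np.full(n_new, rule_label)
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

# Toy feedback rule: "if feature 0 > 1.5, the label should be 1".
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # labels from the old decision boundary
rule_mask = X[:, 0] > 1.5                        # rule condition on the raw features
X_aug, y_aug = oversample_for_rule(X, y, rule_mask, rule_label=1, n_new=100)
model = LogisticRegression().fit(X_aug, y_aug)   # retrain on the augmented data
```

In this toy example the retrained boundary is pulled toward the rule region; FROTE addresses the general pre-processing problem, including the ability to handle many feedback rules at once.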
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
arXiv Detail & Related papers (2024-10-17T17:59:02Z) - Federated Continual Learning Goes Online: Uncertainty-Aware Memory Management for Vision Tasks and Beyond [13.867793835583463]
We propose an uncertainty-aware memory-based approach to address catastrophic forgetting.
We retrieve samples with specific characteristics and, by retraining the model on such samples, demonstrate the potential of this approach.
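The summary does not specify the selection criterion; purely as an assumed illustration (predictive entropy as the uncertainty measure is a choice made here, not necessarily the paper's), a replay memory could keep the most uncertain samples:

```python
import numpy as np

def select_replay_samples(probs, k):
    """Keep the k samples with the highest predictive entropy, a common
    uncertainty proxy, as candidates for the rehearsal memory."""
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(entropy)[-k:]              # indices of the k most uncertain samples

# Toy usage: class probabilities for five samples, keep the two most uncertain.
probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.8, 0.2], [0.55, 0.45], [0.99, 0.01]])
memory_idx = select_replay_samples(probs, k=2)   # -> indices 3 and 1
```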
arXiv Detail & Related papers (2024-05-29T09:29:39Z) - Induced Model Matching: How Restricted Models Can Help Larger Ones [1.7676816383911753]
We consider scenarios where a very accurate predictive model using restricted features is available at the time of training of a larger, full-featured, model.
How can the restricted model be useful to the full model?
We propose an approach for transferring the knowledge of the restricted model to the full model by aligning the full model's context-restricted performance with that of the restricted model.
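A minimal sketch of one way such an alignment could look in code follows. The per-example KL penalty, the feature split, and all names are assumptions; the paper's induced-model construction is more involved than this proxy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

RESTRICTED = 3                               # the restricted model sees only the first 3 features
full_model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
restricted_model = nn.Sequential(nn.Linear(RESTRICTED, 2))   # assumed already trained and accurate
opt = torch.optim.Adam(full_model.parameters(), lr=1e-3)

def matching_loss(x, y, alpha=0.5):
    """Label cross-entropy plus a KL term nudging the full model's
    predictions toward the restricted model's predictions on the
    restricted-feature view of the same inputs (a crude per-example
    proxy for matching context-restricted performance)."""
    logits = full_model(x)
    ce = F.cross_entropy(logits, y)
    with torch.no_grad():
        target = F.softmax(restricted_model(x[:, :RESTRICTED]), dim=-1)
    kl = F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")
    return ce + alpha * kl

# One training step on random toy data.
x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loss = matching_loss(x, y)
opt.zero_grad(); loss.backward(); opt.step()
```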
arXiv Detail & Related papers (2024-02-19T20:21:09Z) - Adapting Large Language Models for Content Moderation: Pitfalls in Data
Engineering and Supervised Fine-tuning [79.53130089003986]
Large Language Models (LLMs) have become a feasible solution for handling tasks in various domains.
In this paper, we show how to fine-tune an LLM that can be privately deployed for content moderation.
arXiv Detail & Related papers (2023-10-05T09:09:44Z) - Learning non-Markovian Decision-Making from State-only Sequences [57.20193609153983]
We develop a model-based approach to imitating state-only sequences with a non-Markov Decision Process (nMDP).
We demonstrate the efficacy of the proposed method in a path planning task with non-Markovian constraints.
arXiv Detail & Related papers (2023-06-27T02:26:01Z) - User Driven Model Adjustment via Boolean Rule Explanations [7.814304432499296]
We present a solution which leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries.
Our interactive overlay approach achieves this goal without requiring model retraining.
We demonstrate that user feedback rules can be layered with the ML predictions to provide immediate changes, which in turn supports learning with less data.
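A bare-bones sketch of the overlay idea is shown below; the rule representation and the toy classifier are assumptions, whereas the paper works with Boolean rule explanations inside an interactive workflow.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def overlay_predict(model_predict, rules, X):
    """Layer user feedback rules over model predictions: wherever a rule's
    condition fires, the rule's label overrides the model; elsewhere the
    model's prediction is kept. No retraining is needed."""
    preds = np.asarray(model_predict(X)).copy()
    for condition, label in rules:                # later rules take precedence
        preds[condition(X)] = label
    return preds

# Toy usage with a hypothetical rule "if feature 0 > 1.5, predict class 1".
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X.sum(axis=1) > 0).astype(int)
clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
rules = [(lambda Z: Z[:, 0] > 1.5, 1)]
adjusted = overlay_predict(clf.predict, rules, X)
```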
arXiv Detail & Related papers (2022-03-28T20:27:02Z) - ReLACE: Reinforcement Learning Agent for Counterfactual Explanations of
Arbitrary Predictive Models [6.939617874336667]
We introduce a model-agnostic algorithm to generate optimal counterfactual explanations.
Our method is easily applied to any black-box model, since the black-box model serves as the environment that the DRL agent interacts with.
In addition, we develop an algorithm to extract explainable decision rules from the DRL agent's policy, so as to make the process of generating CFs itself transparent.
arXiv Detail & Related papers (2021-10-22T17:08:49Z) - CARE: Coherent Actionable Recourse based on Sound Counterfactual
Explanations [0.0]
This paper introduces CARE, a modular explanation framework that addresses the model- and user-level desiderata.
As a model-agnostic approach, CARE generates multiple, diverse explanations for any black-box model.
arXiv Detail & Related papers (2021-08-18T15:26:59Z) - Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
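As a purely linear toy sketch of the "layer as associative memory" view, the minimal-norm rank-one update below forces one key to map to a new value; it is an assumed stand-in for the paper's constrained update of a real generator layer, and the user interface is of course omitted.

```python
import numpy as np

def edit_associative_layer(W, k_star, v_star):
    """Minimal Frobenius-norm rank-one edit of a linear map viewed as an
    associative memory (v = W k): after the edit, W_new @ k_star == v_star,
    while directions orthogonal to k_star are left unchanged."""
    k_star = k_star.reshape(-1, 1)
    v_star = v_star.reshape(-1, 1)
    residual = v_star - W @ k_star
    return W + residual @ k_star.T / float(k_star.T @ k_star)

# Toy usage: remap one "key" direction to a new "value".
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
k_star, v_star = rng.normal(size=6), np.ones(4)
W_new = edit_associative_layer(W, k_star, v_star)
assert np.allclose(W_new @ k_star, v_star)       # the edited rule now holds exactly
```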
arXiv Detail & Related papers (2020-07-30T17:58:16Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
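A minimal sketch of the zeroth-order ingredient is given below; the two-point gradient estimator is standard, but the toy quadratic "black box" stands in for querying the real model, and BAR's input transformation and multi-label mapping are omitted.

```python
import numpy as np

def zeroth_order_grad(loss_fn, theta, n_samples=20, mu=0.01, rng=None):
    """Two-point zeroth-order gradient estimate: query the black-box loss at
    random symmetric perturbations of theta and average the directional
    differences. No gradients from the model itself are required."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.normal(size=theta.shape)
        grad += (loss_fn(theta + mu * u) - loss_fn(theta - mu * u)) / (2 * mu) * u
    return grad / n_samples

# Toy black box: stands in for "send the reprogrammed input to the model, score the output".
black_box_loss = lambda x: float(np.sum((x - 1.0) ** 2))
theta = np.zeros(8)                                        # the input "program" being learned
for _ in range(200):
    theta -= 0.05 * zeroth_order_grad(black_box_loss, theta)
```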
arXiv Detail & Related papers (2020-07-17T01:52:34Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix's applicability is confirmed through different examples, showing how it can be used in practice to promote the interpretability of RF models.
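As a loose sketch of the data structure behind such a view, the traversal below flattens a fitted decision tree into one row per root-to-leaf rule with per-feature intervals as the cells; it is an assumed illustration of the rule-extraction step only, not the paper's implementation, and the visualization itself is omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def tree_to_rule_rows(tree):
    """Flatten a fitted decision tree into ExMatrix-style rows: each rule is
    a dict {feature_index: (low, high)} plus the class predicted at the leaf."""
    t = tree.tree_
    rows = []
    def walk(node, bounds):
        if t.children_left[node] == -1:           # leaf node: emit the accumulated rule
            rows.append((dict(bounds), int(np.argmax(t.value[node]))))
            return
        f, thr = t.feature[node], t.threshold[node]
        lo, hi = bounds.get(f, (-np.inf, np.inf))
        walk(t.children_left[node], {**bounds, f: (lo, min(hi, thr))})   # feature <= threshold
        walk(t.children_right[node], {**bounds, f: (max(lo, thr), hi)})  # feature > threshold
    walk(0, {})
    return rows

# Toy usage: extract the rules of a shallow tree.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
for predicates, label in tree_to_rule_rows(clf):
    print(predicates, "->", label)
```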
arXiv Detail & Related papers (2020-05-08T21:03:48Z)