Consequence-aware Sequential Counterfactual Generation
- URL: http://arxiv.org/abs/2104.05592v1
- Date: Mon, 12 Apr 2021 16:10:03 GMT
- Title: Consequence-aware Sequential Counterfactual Generation
- Authors: Philip Naumann and Eirini Ntoutsi
- Abstract summary: We propose a model-agnostic method for sequential counterfactual generation.
Our approach generates less costly solutions, is more efficient, and provides the user with a diverse set of solutions to choose from.
- Score: 5.71097144710995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Counterfactuals have become a popular technique nowadays for interacting with
black-box machine learning models and understanding how to change a particular
instance to obtain a desired outcome from the model. However, most existing
approaches assume instant materialization of these changes, ignoring that they
may require effort and a specific order of application. Recently, methods have
been proposed that also consider the order in which actions are applied,
leading to the so-called sequential counterfactual generation problem.
In this work, we propose a model-agnostic method for sequential
counterfactual generation. We formulate the task as a multi-objective
optimization problem and present an evolutionary approach to find optimal
sequences of actions leading to the counterfactuals. Our cost model considers
not only the direct effect of an action, but also its consequences.
Experimental results show that compared to state of the art, our approach
generates less costly solutions, is more efficient, and provides the user with
a diverse set of solutions to choose from.
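The abstract does not include code, so the following is only a minimal illustrative sketch of the general idea: sequential counterfactual search framed as a multi-objective evolutionary loop over action sequences, where an action's cost depends on the state it is applied to (a stand-in for a consequence-aware cost model). The toy model, actions, and costs (toy_model, increase_income, reduce_debt, get_degree) are hypothetical and are not taken from the paper.

```python
# Illustrative sketch: multi-objective evolutionary search over action sequences.
# Objectives: (total cost, outcome flag), where cost is "consequence-aware" in the
# sense that each action's cost depends on the state it is applied to.
import random

# Toy black-box model: "loan approved" if income is high enough relative to debt.
def toy_model(state):
    return state["income"] - 0.5 * state["debt"] > 60  # desired outcome: True

# Candidate actions; each returns a new state.
def increase_income(s): return {**s, "income": s["income"] + 10}
def reduce_debt(s):     return {**s, "debt": max(0, s["debt"] - 15)}
def get_degree(s):      return {**s, "income": s["income"] * 1.2, "debt": s["debt"] + 20}

ACTIONS = [increase_income, reduce_debt, get_degree]

def action_cost(action, state):
    # Consequence-aware flavour: reducing debt gets cheaper once income is higher.
    if action is reduce_debt:
        return 15.0 / (1.0 + state["income"] / 100.0)
    if action is get_degree:
        return 30.0
    return 10.0

def evaluate(seq, start):
    """Apply actions in order; return (total cost, 0 if desired outcome else 1)."""
    state, cost = dict(start), 0.0
    for a in seq:
        cost += action_cost(a, state)  # cost depends on the state at application time
        state = a(state)
    return cost, 0 if toy_model(state) else 1

def dominates(f, g):
    return all(x <= y for x, y in zip(f, g)) and any(x < y for x, y in zip(f, g))

def evolve(start, pop_size=40, gens=60, max_len=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in range(rng.randint(1, max_len))]
           for _ in range(pop_size)]
    for _ in range(gens):
        children = []
        for seq in pop:
            child = list(seq)
            if rng.random() < 0.5 and len(child) < max_len:
                child.insert(rng.randrange(len(child) + 1), rng.choice(ACTIONS))
            elif len(child) > 1:
                child.pop(rng.randrange(len(child)))
            else:
                child[0] = rng.choice(ACTIONS)
            children.append(child)
        scored = [(evaluate(s, start), s) for s in pop + children]
        # Survivor selection: keep the non-dominated (Pareto) front, then the cheapest rest.
        front = [s for f, s in scored if not any(dominates(g, f) for g, _ in scored)]
        rest = sorted((fs for fs in scored if fs[1] not in front), key=lambda fs: fs[0])
        pop = (front + [s for _, s in rest])[:pop_size]
    # Return only sequences that reach the desired outcome, as (cost, action names).
    results = {(evaluate(s, start)[0], tuple(a.__name__ for a in s))
               for s in pop if evaluate(s, start)[1] == 0}
    return sorted(results)

if __name__ == "__main__":
    for cost, names in evolve({"income": 50, "debt": 40}):
        print(f"cost={cost:.2f}  actions={list(names)}")
```

In this toy setting, applying increase_income before reduce_debt lowers the later debt-reduction cost, so the cheapest surviving sequences reflect the ordering effect that a consequence-aware cost model is meant to capture, and the returned set gives the user several differently priced sequences to choose from.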
Related papers
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL)
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Sample Efficient Reinforcement Learning via Model-Ensemble Exploration and Exploitation [3.728946517493471]
MEEE is a model-ensemble method that consists of optimistic exploration and weighted exploitation.
Our approach outperforms other model-free and model-based state-of-the-art methods, especially in sample complexity.
arXiv Detail & Related papers (2021-07-05T07:18:20Z)
- Automated Decision-based Adversarial Attacks [48.01183253407982]
We consider the practical and challenging decision-based black-box adversarial setting.
Under this setting, the attacker can only acquire the final classification labels by querying the target model.
We propose to automatically discover decision-based adversarial attack algorithms.
arXiv Detail & Related papers (2021-05-09T13:15:10Z)
- Ordered Counterfactual Explanation by Mixed-Integer Linear Optimization [10.209615216208888]
We propose a new framework called Ordered Counterfactual Explanation (OrdCE)
We introduce a new objective function that evaluates a pair of an action and an order based on feature interaction.
Numerical experiments on real datasets demonstrated the effectiveness of our OrdCE in comparison with unordered CE methods.
arXiv Detail & Related papers (2020-12-22T01:41:23Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state of the art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
- Benchmarking deep inverse models over time, and the neural-adjoint method [3.4376560669160394]
We consider the task of solving generic inverse problems, where one wishes to determine the hidden parameters of a natural system.
We conceptualize these models as different schemes for efficiently, but randomly, exploring the space of possible inverse solutions.
We compare several state-of-the-art inverse modeling approaches on four benchmark tasks.
arXiv Detail & Related papers (2020-09-27T18:32:06Z)
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in this setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.