Robust Bayesian Recourse
- URL: http://arxiv.org/abs/2206.10833v1
- Date: Wed, 22 Jun 2022 04:17:17 GMT
- Title: Robust Bayesian Recourse
- Authors: Tuan-Duy H. Nguyen, Ngoc Bui, Duy Nguyen, Man-Chung Yue, Viet Anh
Nguyen
- Abstract summary: Algorithmic recourse aims to recommend informative feedback to overturn an unfavorable machine learning decision.
We introduce in this paper the Bayesian recourse, a model-agnostic recourse that minimizes the posterior probability odds ratio.
We present our min-max robust counterpart with the goal of hedging against future changes in the machine learning model parameters.
- Score: 13.526999070658231
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic recourse aims to recommend informative feedback to overturn an
unfavorable machine learning decision. We introduce in this paper the Bayesian
recourse, a model-agnostic recourse that minimizes the posterior probability
odds ratio. Further, we present its min-max robust counterpart with the goal of
hedging against future changes in the machine learning model parameters. The
robust counterpart explicitly takes into account possible perturbations of the
data in a Gaussian mixture ambiguity set prescribed using the optimal transport
(Wasserstein) distance. We show that the resulting worst-case objective
function can be decomposed into solving a series of two-dimensional
optimization subproblems, and the min-max recourse finding problem is thus
amenable to a gradient descent algorithm. Contrary to existing methods for
generating robust recourses, the robust Bayesian recourse does not require a
linear approximation step. The numerical experiment demonstrates the
effectiveness of our proposed robust Bayesian recourse facing model shifts. Our
code is available at https://github.com/VinAIResearch/robust-bayesian-recourse.
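
For intuition only, the sketch below illustrates the non-robust idea from the abstract: searching for a recourse by gradient descent on an estimated log posterior odds ratio. It is not the authors' algorithm — it fits a single Gaussian per class rather than a Gaussian mixture and ignores the Wasserstein ambiguity set and the two-dimensional subproblem decomposition — and all function names and parameters (fit_gaussian, log_density, bayesian_recourse, prior_pos, lr, steps) are hypothetical.

```python
import numpy as np

def fit_gaussian(X):
    """Fit a single Gaussian (mean, covariance) to the samples of one class."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    return mu, cov

def log_density(x, mu, cov):
    """Log density of a multivariate Gaussian, up to a shared additive constant."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)

def bayesian_recourse(x0, X_neg, X_pos, prior_pos=0.5, lr=0.05, steps=200):
    """Gradient-descent search for a recourse x that minimizes the estimated
    log posterior odds ratio log p(y=0 | x) - log p(y=1 | x), pushing x
    toward the favorable class. Gradients are taken by finite differences,
    so the class-conditional density model is treated as a black box."""
    mu0, cov0 = fit_gaussian(X_neg)
    mu1, cov1 = fit_gaussian(X_pos)

    def odds(x):
        log_p0 = log_density(x, mu0, cov0) + np.log(1.0 - prior_pos)
        log_p1 = log_density(x, mu1, cov1) + np.log(prior_pos)
        return log_p0 - log_p1

    x = x0.astype(float).copy()
    eps = 1e-4
    for _ in range(steps):
        grad = np.array([
            (odds(x + eps * e) - odds(x - eps * e)) / (2 * eps)
            for e in np.eye(len(x))
        ])
        x -= lr * grad
        if odds(x) < 0:  # posterior now favors the positive (favorable) class
            break
    return x
```

The robust counterpart described in the abstract would instead take a worst case over Gaussian mixtures in a Wasserstein ambiguity set, which the paper shows decomposes into a series of two-dimensional optimization subproblems amenable to a gradient descent algorithm.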
Related papers
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
arXiv Detail & Related papers (2023-10-09T03:55:09Z) - Distributionally Robust Recourse Action [12.139222986297263]
A recourse action aims to explain a particular algorithmic decision by showing one specific way in which the instance could be modified to receive an alternate outcome.
We propose the Distributionally Robust Recourse Action (DiRRAc) framework, which generates a recourse action that has a high probability of being valid under a mixture of model shifts.
arXiv Detail & Related papers (2023-02-22T08:52:01Z) - Bayesian Optimization for Macro Placement [48.55456716632735]
We develop a novel approach to macro placement using Bayesian optimization (BO) over sequence pairs.
BO is a machine learning technique that uses a probabilistic surrogate model and an acquisition function.
We demonstrate our algorithm on the fixed-outline macro placement problem with the half-perimeter wire length objective.
arXiv Detail & Related papers (2022-07-18T06:17:06Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate non-batch optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - RoMA: Robust Model Adaptation for Offline Model-based Optimization [115.02677045518692]
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
A popular approach to solving this problem is maintaining a proxy model that approximates the true objective function.
Here, the main challenge is how to avoid adversarially optimized inputs during the search.
arXiv Detail & Related papers (2021-10-27T05:37:12Z) - Approximate Bayesian Optimisation for Neural Networks [6.921210544516486]
A body of work has been done to automate machine learning algorithms, highlighting the importance of model choice.
Ensuring analytical tractability and computational feasibility is necessary for the efficiency and applicability of the approach.
arXiv Detail & Related papers (2021-08-27T19:03:32Z) - Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z) - Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
arXiv Detail & Related papers (2021-02-27T19:28:39Z) - Slowly Varying Regression under Sparsity [5.22980614912553]
We present the framework of slowly varying regression under sparsity, allowing regression models to exhibit slow and sparse variations.
We suggest a procedure that reformulates the problem as a binary convex optimization problem.
We show that the resulting model outperforms competing formulations in comparable times across various datasets.
arXiv Detail & Related papers (2021-02-22T04:51:44Z) - Scalable Combinatorial Bayesian Optimization with Tractable Statistical
Models [44.25245545568633]
We study the problem of optimizing blackbox functions over combinatorial spaces (e.g., sets, sequences, trees, and graphs).
Based on recent advances in submodular relaxation, we study an approach called Parametrized Submodular Relaxation (PSR) towards the goal of improving the scalability and accuracy of solving AFO problems for the BOCS model.
Experiments on diverse benchmark problems show significant improvements with PSR for the BOCS model.
arXiv Detail & Related papers (2020-08-18T22:56:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.