Learning-Augmented Robust Algorithmic Recourse
- URL: http://arxiv.org/abs/2410.01580v1
- Date: Wed, 2 Oct 2024 14:15:32 GMT
- Title: Learning-Augmented Robust Algorithmic Recourse
- Authors: Kshitij Kayastha, Vasilis Gkatzelis, Shahin Jabbari
- Abstract summary: Algorithmic recourse provides suggestions of minimum-cost improvements to achieve a desirable outcome in the future.
Machine learning models often get updated over time and this can cause a recourse to become invalid.
We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
- Score: 7.217269034256654
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The widespread use of machine learning models in high-stakes domains can have a major negative impact, especially on individuals who receive undesirable outcomes. Algorithmic recourse provides such individuals with suggestions of minimum-cost improvements they can make to achieve a desirable outcome in the future. However, machine learning models often get updated over time and this can cause a recourse to become invalid (i.e., not lead to the desirable outcome). The robust recourse literature aims to choose recourses that are less sensitive, even against adversarial model changes, but this comes at a higher cost. To overcome this obstacle, we initiate the study of algorithmic recourse through the learning-augmented framework and evaluate the extent to which a designer equipped with a prediction regarding future model changes can reduce the cost of recourse when the prediction is accurate (consistency) while also limiting the cost even when the prediction is inaccurate (robustness). We propose a novel algorithm for this problem, study the robustness-consistency trade-off, and analyze how prediction accuracy affects performance.
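To make the consistency-robustness trade-off concrete, here is a minimal Python sketch for a linear classifier with L2 recourse cost. The closed-form halfspace projection is standard for linear models; the `trust` knob, the drift budget, and the margin heuristic are illustrative assumptions, not the algorithm proposed in the paper.

```python
import numpy as np

def halfspace_recourse(x, w, b, margin=0.0):
    """Minimum L2-cost move so that w @ x' + b >= margin (closed form for a linear model)."""
    gap = margin - (w @ x + b)
    if gap <= 0:
        return x.copy()                  # already satisfies the constraint
    return x + gap * w / (w @ w)         # project onto the shifted halfspace

def learning_augmented_recourse(x, w_pred, b_pred, drift_budget, trust):
    """trust in [0, 1]: 1 acts purely on the predicted future model (cheapest, 'consistent'),
    0 hedges fully against any weight perturbation of norm <= drift_budget ('robust').
    The robust margin uses ||x|| as a one-shot proxy for ||x'||; a full treatment
    would handle that dependence exactly."""
    robust_margin = drift_budget * np.linalg.norm(x)
    return halfspace_recourse(x, w_pred, b_pred, margin=(1.0 - trust) * robust_margin)

# A rejected applicant under a predicted future linear model.
x = np.array([0.2, -0.5])
w_pred, b_pred = np.array([1.0, 1.0]), -0.5
for trust in (1.0, 0.5, 0.0):
    x_new = learning_augmented_recourse(x, w_pred, b_pred, drift_budget=0.3, trust=trust)
    print(f"trust={trust}: recourse={np.round(x_new, 3)}, cost={np.linalg.norm(x_new - x):.3f}")
```

Decreasing `trust` inflates the margin and therefore the cost, mirroring the consistency-robustness tension the paper studies: cheap recourse when the prediction is trusted and accurate, more expensive but safer recourse when it is not.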
Related papers
- Optimistic Regret Bounds for Online Learning in Adversarial Markov Decision Processes [5.116582735311639]
We introduce and study a new variant of AMDP, which aims to minimize regret while utilizing a set of cost predictors.
We develop a new policy search method that achieves sublinear optimistic regret with high probability, that is, a regret bound that gracefully degrades with the estimation power of the cost predictors.
arXiv Detail & Related papers (2024-05-03T15:44:31Z) - Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z) - Re-thinking Data Availability Attacks Against Deep Neural Networks [53.64624167867274]
In this paper, we re-examine the concept of unlearnable examples and discern that the existing robust error-minimizing noise presents an inaccurate optimization objective.
We introduce a novel optimization paradigm that yields improved protection results with reduced computational time requirements.
arXiv Detail & Related papers (2023-05-18T04:03:51Z) - Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z) - Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse [34.39887495671287]
We propose an objective function that minimizes the gap between the achieved (resulting) and desired recourse invalidation rates while keeping recourse costs low.
We develop novel theoretical results to characterize the recourse invalidation rates corresponding to any given instance.
Experimental evaluation with multiple real world datasets demonstrates the efficacy of the proposed framework.
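As a rough illustration of the invalidation-rate notion above, the sketch below uses Monte Carlo sampling to estimate how often a fixed counterfactual flips back to rejection when the weights of a linear model are perturbed by Gaussian noise; the linear scorer and Gaussian shift model are assumptions for the example, not the paper's exact setup.

```python
import numpy as np

def invalidation_rate(x_cf, w, b, sigma, n_samples=1000, rng=None):
    """Monte Carlo estimate of how often a counterfactual x_cf becomes invalid
    when the linear model's weights receive Gaussian noise of scale sigma
    (an illustrative shift model)."""
    rng = np.random.default_rng(rng)
    noise = rng.normal(scale=sigma, size=(n_samples, w.shape[0]))
    scores = (w + noise) @ x_cf + b
    return float(np.mean(scores < 0))   # fraction of perturbed models that reject x_cf

# A counterfactual near the decision boundary is invalidated far more often
# than one that sits deeper inside the positive region.
w, b = np.array([1.0, 1.0]), -1.0
print(invalidation_rate(np.array([0.55, 0.50]), w, b, sigma=0.3))  # near boundary -> high rate
print(invalidation_rate(np.array([1.50, 1.50]), w, b, sigma=0.3))  # far from boundary -> low rate
```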
arXiv Detail & Related papers (2022-03-13T21:39:24Z) - Learning Uncertainty with Artificial Neural Networks for Improved Remaining Time Prediction of Business Processes [0.15229257192293202]
This paper is the first to apply these techniques to predictive process monitoring.
We found that they contribute towards more accurate predictions and can be computed quickly.
This leads to many interesting applications, enables earlier adoption of prediction systems with smaller datasets, and fosters better cooperation with humans.
arXiv Detail & Related papers (2021-05-12T10:18:57Z) - Towards Robust and Reliable Algorithmic Recourse [11.887537452826624]
We propose a novel framework, RObust Algorithmic Recourse (ROAR), that leverages adversarial training for finding recourses that are robust to model shifts.
We also carry out detailed theoretical analysis which underscores the importance of constructing recourses that are robust to model shifts.
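A minimal sketch of the min-max idea behind robust recourse, assuming a linear scorer and an L2 ball of weight perturbations (for which the adversary's worst case has a closed form); it is in the spirit of ROAR but is not the authors' implementation, and the step size, penalty weight, and budget are illustrative.

```python
import numpy as np

def roar_style_recourse(x0, w, b, eps=0.3, lam=0.1, lr=0.05, steps=500):
    """Gradient sketch of min-max robust recourse for a linear scorer: push the
    worst-case (adversarially minimized) score over weight perturbations
    ||delta||_2 <= eps above zero while penalizing the L2 cost of moving away
    from x0. Hyperparameters and stopping rule are illustrative."""
    x = x0.astype(float).copy()
    for _ in range(steps):
        worst_delta = -eps * x / (np.linalg.norm(x) + 1e-12)  # closed-form score-minimizing perturbation
        if (w + worst_delta) @ x + b >= 0:                    # valid even under the worst admissible shift
            break
        # Ascent step on [worst-case score - lam * ||x - x0||^2], holding worst_delta fixed.
        x = x + lr * ((w + worst_delta) - 2.0 * lam * (x - x0))
    return x

x0 = np.array([0.2, -0.5])
w, b = np.array([1.0, 1.0]), -0.5
x_r = roar_style_recourse(x0, w, b, eps=0.3)
worst_case_score = (w - 0.3 * x_r / np.linalg.norm(x_r)) @ x_r + b
print("robust recourse:", np.round(x_r, 3), "worst-case score:", round(worst_case_score, 3))
```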
arXiv Detail & Related papers (2021-02-26T17:38:52Z) - A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a "slow start, fast decay" learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
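One plausible reading of the schedule named above is a brief warm-up followed by aggressive decay; the sketch below uses a linear ramp and exponential decay with made-up hyperparameters, not the exact schedule from the paper.

```python
def slow_start_fast_decay(epoch, base_lr=0.01, warmup_epochs=5, decay=0.5):
    """Illustrative 'slow start, fast decay' learning-rate schedule:
    ramp up linearly to base_lr, then shrink the rate geometrically."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs        # slow start
    return base_lr * decay ** (epoch - warmup_epochs + 1)   # fast decay

for epoch in range(10):
    print(epoch, slow_start_fast_decay(epoch))
```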
arXiv Detail & Related papers (2020-12-25T20:50:15Z) - A comparison of Monte Carlo dropout and bootstrap aggregation on the performance and uncertainty estimation in radiation therapy dose prediction with deep learning neural networks [0.46180371154032895]
We propose to use Monte Carlo dropout (MCDO) and the bootstrap aggregation (bagging) technique on deep learning models to produce uncertainty estimations for radiation therapy dose prediction.
Performance-wise, bagging provides a statistically significant reduction in loss value and errors on most of the metrics investigated.
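For reference, Monte Carlo dropout amounts to keeping dropout active at inference and aggregating several stochastic forward passes; the PyTorch sketch below uses a tiny placeholder network rather than the dose-prediction architecture studied in the paper.

```python
import torch
import torch.nn as nn

# Tiny stand-in network; the paper's dose-prediction models are far larger.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_passes=50):
    """Monte Carlo dropout: keep dropout layers active at inference time and
    aggregate several stochastic forward passes; the spread across passes serves
    as the per-sample uncertainty estimate."""
    model.train()                      # keeps Dropout sampling masks (no gradient updates happen here)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_passes)])
    return preds.mean(dim=0), preds.std(dim=0)

x = torch.randn(8, 4)                  # a batch of 8 hypothetical feature vectors
mean, std = mc_dropout_predict(model, x)
print(mean.shape, std.shape)           # per-sample prediction and uncertainty, each (8, 1)
```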
arXiv Detail & Related papers (2020-11-01T00:24:43Z) - Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Value-driven Hindsight Modelling [68.658900923595]
Value estimation is a critical component of the reinforcement learning (RL) paradigm.
Model learning can make use of the rich transition structure present in sequences of observations, but this approach is usually not sensitive to the reward function.
We develop an approach for representation learning in RL that sits in between these two extremes.
This provides tractable prediction targets that are directly relevant for a task, and can thus accelerate learning the value function.
arXiv Detail & Related papers (2020-02-19T18:10:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.