Strategic Learning with Local Explanations as Feedback
- URL: http://arxiv.org/abs/2502.04058v1
- Date: Thu, 06 Feb 2025 13:17:24 GMT
- Title: Strategic Learning with Local Explanations as Feedback
- Authors: Kiet Q. H. Vo, Siu Lun Chau, Masahiro Kato, Yixin Wang, Krikamol Muandet
- Abstract summary: Under conditional homogeneity, action recommendation (AR)-based explanations are sufficient for non-harmful agent responses.
We propose a simple algorithm to jointly optimise the predictive model and the AR policy to balance DM outcomes with agent welfare.
- Score: 29.57116418734347
- Abstract: We investigate algorithmic decision problems where agents can respond strategically to the decision maker's (DM) models. The demand for clear and actionable explanations from DMs to (potentially strategic) agents continues to rise. While prior work often treats explanations as full model disclosures, explanations in practice might convey only partial information, which can lead to misinterpretations and harmful responses. When full disclosure of the predictive model is neither feasible nor desirable, a key open question is how DMs can use explanations to maximise their utility without compromising agent welfare. In this work, we explore well-known local and global explanation methods, and establish a necessary condition to prevent explanations from misleading agents into self-harming actions. Moreover, with conditional homogeneity, we establish that action recommendation (AR)-based explanations are sufficient for non-harmful responses, akin to the revelation principle in information design. To operationalise AR-based explanations, we propose a simple algorithm to jointly optimise the predictive model and AR policy to balance DM outcomes with agent welfare. Our empirical results demonstrate the benefits of this approach as a more refined strategy for safe and effective partial model disclosure in algorithmic decision-making.
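To make the final step concrete, here is a minimal, hypothetical sketch of such a joint optimisation; it is not the authors' algorithm. The predictive model is reduced to a single acceptance threshold, the AR policy to a one-feature step size, and both are grid-searched against a weighted sum of DM utility and agent welfare. All data, weights, and the `evaluate` objective are invented for illustration.

```python
# Minimal illustrative sketch, NOT the paper's algorithm: the DM tunes a
# decision threshold jointly with an AR policy's "step" size, scoring
# configurations by DM utility plus weighted agent welfare.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))          # agent features (made-up data)
w = np.array([1.0, 0.5])             # DM's scoring weights (assumed fixed)
noise = rng.normal(scale=0.5, size=n)

def evaluate(threshold, step, lam=0.5):
    """Agents below the threshold follow the AR (raise feature 0 by `step`)."""
    X_resp = X.copy()
    needs_action = X @ w < threshold          # agents who would be rejected
    X_resp[needs_action, 0] += step           # they act on the recommendation
    y_resp = (X_resp @ w + noise) > 0         # true outcome after responding
    accept = X_resp @ w >= threshold          # DM's decision
    dm_utility = np.mean(accept == y_resp)    # DM: decision accuracy
    welfare = np.mean(y_resp[needs_action]) - step  # benefit minus effort cost
    return dm_utility + lam * welfare

# Joint grid search over the model threshold and the AR step.
grid = [(t, s) for t in np.linspace(-1, 1, 21) for s in np.linspace(0, 1, 11)]
best_t, best_s = max(grid, key=lambda p: evaluate(*p))
print(f"best threshold {best_t:.2f}, best AR step {best_s:.2f}")
```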
Related papers
- On Generating Monolithic and Model Reconciling Explanations in Probabilistic Scenarios [46.752418052725126]
We propose a novel framework for generating probabilistic monolithic explanations and model reconciling explanations.
For monolithic explanations, our approach integrates uncertainty by utilizing probabilistic logic to increase the probability of the explanandum.
For model reconciling explanations, we propose a framework that extends the logic-based variant of the model reconciliation problem to account for probabilistic human models.
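As a loose, self-invented illustration of "increasing the probability of the explanandum" with probabilistic reasoning (not this paper's framework): weight the possible worlds, and score a candidate explanation by how much conditioning on it lifts the explanandum's probability.

```python
# Toy illustration, not the paper's framework: worlds are truth assignments
# over atoms with made-up weights; an explanation is a set of literals we
# condition on, and we check how much it raises P(explanandum).
from itertools import product

atoms = ["a", "b", "e"]           # "e" is the explanandum
weights = {}                       # hypothetical weighted worlds
for world in product([False, True], repeat=3):
    w = dict(zip(atoms, world))
    # invented weighting: "e" tends to hold when both a and b hold
    weights[world] = 4.0 if (w["a"] and w["b"] and w["e"]) else 1.0

def prob(event, given=None):
    """P(event | given) under the weighted-worlds distribution."""
    num = den = 0.0
    for world, wt in weights.items():
        w = dict(zip(atoms, world))
        if given is None or all(w[k] == v for k, v in given.items()):
            den += wt
            if all(w[k] == v for k, v in event.items()):
                num += wt
    return num / den

base = prob({"e": True})
lifted = prob({"e": True}, given={"a": True, "b": True})
print(f"P(e) = {base:.3f}, P(e | a, b) = {lifted:.3f}")  # explanation lifts P(e)
```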
arXiv Detail & Related papers (2024-05-29T16:07:31Z)
- Robust Explainable Recommendation [10.186029242664931]
We present a general framework for feature-aware explainable recommenders that can withstand external attacks.
Our framework is simple to implement and supports different explanation methods regardless of the internal structure or intrinsic utility of the underlying model.
arXiv Detail & Related papers (2024-05-03T05:03:07Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misinterpretation.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
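A hypothetical sketch of that idea (mine, not the paper's methodology): for an inventory-style decision driven by a made-up contextual cost model, a counterfactual explanation is the smallest shift in one context feature that changes the prescribed action.

```python
# Hypothetical sketch, not the paper's method: a counterfactual explanation
# for a data-driven decision, i.e. the smallest shift in one context feature
# that changes the prescribed order quantity.
import numpy as np

ACTIONS = np.array([10, 20, 30])               # candidate order quantities

def predicted_cost(context, action):
    """Invented cost model: demand is linear in context; under/over-stock costs."""
    demand = 1.5 * context[0] + 0.5 * context[1]
    return 1.0 * max(demand - action, 0) + 0.3 * max(action - demand, 0)

def decide(context):
    return ACTIONS[np.argmin([predicted_cost(context, a) for a in ACTIONS])]

def counterfactual(context, feature, max_shift=20.0, step=0.1):
    """Smallest |delta| on `feature` that changes the decision, if any."""
    base = decide(context)
    for delta in np.arange(step, max_shift, step):
        for signed in (delta, -delta):
            x = np.array(context, dtype=float)
            x[feature] += signed
            if decide(x) != base:
                return signed, decide(x)
    return None

x0 = [12.0, 8.0]
print("decision:", decide(x0))
print("counterfactual on feature 0:", counterfactual(x0, 0))
```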
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By modelling the decision-making processes underlying a set of observed trajectories as online learning, we cast policy inference as the inverse of that online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Insights into Data through Model Behaviour: An Explainability-driven Strategy for Data Auditing for Responsible Computer Vision Applications [70.92379567261304]
This study explores an explainability-driven strategy for data auditing.
We demonstrate this strategy by auditing two popular medical benchmark datasets.
We discover hidden data quality issues that lead deep learning models to make predictions for the wrong reasons.
arXiv Detail & Related papers (2021-06-16T23:46:39Z)
- Feature-Based Interpretable Reinforcement Learning based on State-Transition Models [3.883460584034766]
Growing concerns regarding the operational use of AI models in the real world have caused a surge of interest in explaining AI models' decisions to humans.
We propose a method for offering local explanations on risk in reinforcement learning.
arXiv Detail & Related papers (2021-05-14T23:43:11Z)
- Privacy-Constrained Policies via Mutual Information Regularized Policy Gradients [54.98496284653234]
We consider the task of training a policy that maximizes reward while minimizing disclosure of certain sensitive state variables through the actions.
We solve this problem by introducing a regularizer based on the mutual information between the sensitive state and the actions.
We develop a model-based estimator for optimization of privacy-constrained policies.
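A bare-bones rendition of such a regularizer (a toy construction, not the paper's estimator): a tabular softmax policy is trained so expected reward stays high while the exact mutual information between a sensitive state bit and the action is penalised.

```python
# Toy sketch, not the paper's estimator: penalise the mutual information
# between a sensitive state bit and the action so the trained policy stops
# leaking the bit, at some cost in reward.
import numpy as np

P_S = np.array([0.5, 0.5])        # distribution of the sensitive bit s
REWARD = np.array([[1.0, 0.0],    # reward[s, a]: acting "a == s" pays best,
                   [0.0, 1.0]])   # so an unregularised policy reveals s

def policy(logits):
    """Tabular softmax policy pi(a|s); logits has shape (2 states, 2 actions)."""
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def objective(logits, beta):
    pi = policy(logits)
    exp_reward = (P_S[:, None] * pi * REWARD).sum()
    p_a = P_S @ pi                             # marginal over actions
    joint = P_S[:, None] * pi                  # p(s, a)
    mi = (joint * np.log(joint / (P_S[:, None] * p_a[None, :]) + 1e-12)).sum()
    return exp_reward - beta * mi              # quantity to maximise

def train(beta, lr=0.5, steps=400, eps=1e-4):
    logits = np.zeros((2, 2))
    for _ in range(steps):                     # finite-difference ascent
        grad = np.zeros_like(logits)
        for i in np.ndindex(*logits.shape):
            bump = np.zeros_like(logits)
            bump[i] = eps
            grad[i] = (objective(logits + bump, beta)
                       - objective(logits - bump, beta)) / (2 * eps)
        logits += lr * grad
    return policy(logits)

print("beta=0:\n", train(0.0).round(2))        # policy leaks s through a
print("beta=5:\n", train(5.0).round(2))        # MI penalty flattens pi(a|s)
```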
arXiv Detail & Related papers (2020-12-30T03:22:35Z)
- Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses [14.626432428431594]
We propose a novel model-agnostic framework called Actionable Recourse Summaries (AReS) to construct global counterfactual explanations.
We formulate a novel objective which simultaneously optimizes for the correctness of the recourses and the interpretability of the explanations.
Our framework can provide decision makers with a comprehensive overview of recourses corresponding to any black box model.
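To give a flavour of such an objective (an illustrative toy, not AReS itself): greedily pick if-then recourse rules that maximise how many rejected individuals they fix, with a per-rule penalty standing in for the interpretability term.

```python
# Illustrative toy, not AReS itself: greedily assemble a small set of
# recourse rules, trading off recourse correctness against rule-set size.
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = rng.uniform(0, 1, size=(n, 3))             # individuals' features (made up)

def model(Z):
    """Hypothetical black-box acceptance rule."""
    return (Z @ np.array([2.0, 1.0, -1.0])) > 1.2

rejected = ~model(X)

# Candidate rules: (feature j, new value v) means "set feature j to v".
CANDIDATES = [(j, v) for j in range(3) for v in (0.0, 0.5, 1.0)]

def fixed_by(rule):
    """Rejected individuals who become accepted after applying the rule."""
    j, v = rule
    X2 = X.copy()
    X2[:, j] = v
    return rejected & model(X2)

def greedy(k_max, penalty=5.0):
    chosen, covered = [], np.zeros(n, dtype=bool)
    for _ in range(k_max):
        gains = [(np.sum(fixed_by(r) & ~covered) - penalty, r)
                 for r in CANDIDATES if r not in chosen]
        gain, best = max(gains)
        if gain <= 0:                          # penalty outweighs new coverage
            break
        chosen.append(best)
        covered |= fixed_by(best)
    return chosen, covered.sum()

rules, n_fixed = greedy(k_max=4)
print(f"rules: {rules}, rejected fixed: {n_fixed}/{rejected.sum()}")
```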
arXiv Detail & Related papers (2020-09-15T15:14:08Z)
- Decisions, Counterfactual Explanations and Strategic Behavior [16.980621769406923]
We find policies and counterfactual explanations that are optimal in terms of utility in a strategic setting.
We show that, given a pre-defined policy, the problem of finding the optimal set of counterfactual explanations is NP-hard.
We demonstrate that, by incorporating a matroid constraint into the problem formulation, we can increase the diversity of the optimal set of counterfactual explanations.
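A hedged sketch of how a matroid constraint induces diversity (my toy, not the paper's formulation): greedy utility maximisation under a partition matroid that admits at most one counterfactual per feature group.

```python
# Toy sketch, not the paper's formulation: select counterfactual explanations
# by greedy utility maximisation under a partition-matroid constraint
# (at most one explanation per feature group), which forces diversity.
import numpy as np

rng = np.random.default_rng(2)
# Each candidate explanation: (utility if offered, feature group id) - made up.
candidates = [(rng.uniform(0, 1), g) for g in range(4) for _ in range(5)]

def greedy_matroid(cands, budget=3):
    chosen, used_groups = [], set()
    # Sort by utility; accept a candidate only if its group is still free
    # (the independence test of the partition matroid) and budget remains.
    for util, group in sorted(cands, reverse=True):
        if group not in used_groups and len(chosen) < budget:
            chosen.append((util, group))
            used_groups.add(group)
    return chosen

for util, group in greedy_matroid(candidates):
    print(f"group {group}: utility {util:.2f}")
```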
arXiv Detail & Related papers (2020-02-11T12:04:41Z)