"Explain it in the Same Way!" -- Model-Agnostic Group Fairness of
Counterfactual Explanations
- URL: http://arxiv.org/abs/2211.14858v1
- Date: Sun, 27 Nov 2022 15:24:06 GMT
- Title: "Explain it in the Same Way!" -- Model-Agnostic Group Fairness of
Counterfactual Explanations
- Authors: André Artelt and Barbara Hammer
- Abstract summary: Counterfactual explanations tell the user what to do in order to change the outcome of the system in a desirable way.
We propose a model-agnostic method for computing counterfactual explanations that do not differ significantly in their complexity between protected groups.
- Score: 8.132423340684568
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations are a popular type of explanation for making the
outcomes of a decision-making system transparent to the user. Counterfactual
explanations tell the user what to do in order to change the outcome of the
system in a desirable way. However, it was recently discovered that these
recommendations can differ significantly in their complexity between
protected groups of individuals. Giving one group recommendations that are
more difficult to act on puts that group at a disadvantage compared to the
others.
In this work we propose a model-agnostic method for computing counterfactual
explanations that do not differ significantly in their complexity between
protected groups.
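As a concrete illustration of the problem and of one naive mitigation, consider the hedged sketch below. It is not the authors' algorithm: the linear model, the L1 cost as the complexity of an explanation, and the shared-budget trick are all assumptions made for the example.

```python
# Hedged sketch, NOT the paper's method: measure how counterfactual
# complexity (L1 cost of the change) differs between two protected groups
# under a toy linear model, then equalize it with a naive shared budget.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -0.5, 0.8])   # toy linear score; w @ x > 0 = favorable

def counterfactual(x, cost=None, eps=1e-3):
    """Nearest-L1 counterfactual for the linear score: change only the most
    influential feature. If `cost` is given, spend at least that much
    (overshooting the boundary), equalizing complexity by construction."""
    j = int(np.argmax(np.abs(w)))
    needed = (eps - w @ x) / abs(w[j])   # minimal change to reach w @ z = eps
    spend = needed if cost is None else max(cost, needed)
    z = x.copy()
    z[j] += np.sign(w[j]) * spend
    return z

group_a = rng.normal(-1.0, 0.3, (200, 3))   # group close to the boundary
group_b = rng.normal(-2.0, 0.3, (200, 3))   # group far from the boundary

avg_cost = lambda X, **kw: np.mean(
    [np.abs(counterfactual(x, **kw) - x).sum() for x in X])

print(avg_cost(group_a), avg_cost(group_b))   # clear complexity gap: unfair
budget = max(np.abs(counterfactual(x) - x).sum()
             for x in np.vstack([group_a, group_b]))
print(avg_cost(group_a, cost=budget),
      avg_cost(group_b, cost=budget))         # equal, but only by overshooting
```

The naive fix above equalizes complexity only by making the easy group's recommendations more costly; the paper's method instead computes counterfactuals whose complexity does not differ significantly between groups in the first place.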
Related papers
- Rectifying Group Irregularities in Explanations for Distribution Shift [18.801357928801412]
Group-aware Shift Explanations (GSE) produces interpretable explanations by leveraging worst-group optimization to rectify group irregularities.
We show how GSE not only maintains group structures, such as demographic and hierarchical subpopulations, but also enhances feasibility and robustness in the resulting explanations.
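The worst-group optimization ingredient can be illustrated generically (this is a stand-in, not the GSE algorithm itself): fit a single shift explanation by repeatedly descending on whichever group it currently explains worst, so no subpopulation is sacrificed for average fit. All names and numbers below are assumptions.

```python
# Generic worst-group optimization sketch for a shift explanation.
import numpy as np

rng = np.random.default_rng(1)
# Toy distribution shift: two subpopulations, one shared underlying shift.
groups_src = [rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))]
true_shift = np.array([1.0, -2.0])
groups_tgt = [g + true_shift + rng.normal(0, 0.1, g.shape) for g in groups_src]

delta = np.zeros(2)                  # the shift explanation being learned
lr = 0.1
for _ in range(200):
    # Per-group error of the current explanation.
    errs = [np.mean((s + delta - t) ** 2)
            for s, t in zip(groups_src, groups_tgt)]
    worst = int(np.argmax(errs))     # worst-group step: descend on the
    s, t = groups_src[worst], groups_tgt[worst]   # worst group's loss only
    delta -= lr * 2 * np.mean(s + delta - t, axis=0)
print(delta)  # close to true_shift, with neither group's fit sacrificed
```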
arXiv Detail & Related papers (2023-05-25T17:57:46Z)
- Explaining Groups of Instances Counterfactually for XAI: A Use Case, Algorithm and User Study for Group-Counterfactuals [7.22614468437919]
We explore a novel use case in which groups of similar instances are explained in a collective fashion.
Group counterfactuals meet a human preference for coherent, broad explanations covering multiple events/instances.
Results show that group counterfactuals elicit modest but definite improvements in people's understanding of an AI system.
arXiv Detail & Related papers (2023-03-16T13:16:50Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
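As a toy analogue (not the paper's methodology), a counterfactual explanation for a data-driven inventory decision can ask how the observed demand history would have to change for the prescribed decision to change; the newsvendor setup and all numbers below are illustrative assumptions.

```python
# Toy newsvendor: the prescribed order is a quantile of demand samples.
import numpy as np

demand = np.array([8., 9., 10., 12., 15.])   # demand observed in context
quantile = 0.8                               # critical ratio of the policy
order = np.quantile(demand, quantile)
print("prescribed order:", order)            # 12.6

# Counterfactual question: what minimal shift in the demand history would
# make the prescription 15?  For a quantile policy, shifting every sample
# by a constant delta shifts the prescribed order by the same delta.
target = 15.0
delta = target - order
print("explanation: demand higher by", round(delta, 2), "units throughout")
print("check:", np.quantile(demand + delta, quantile))   # 15.0
```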
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Homomorphism Autoencoder -- Learning Group Structured Representations from Observed Transitions [51.71245032890532]
We propose methods enabling an agent acting upon the world to learn internal representations of sensory information consistent with actions that modify it.
In contrast to existing work, our approach does not require prior knowledge of the group and does not restrict the set of actions the agent can perform.
arXiv Detail & Related papers (2022-07-25T11:22:48Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- DeepGroup: Representation Learning for Group Recommendation with Implicit Feedback [0.5584060970507505]
We focus on making recommendations for a new group of users whose preferences are unknown, but we are given the decisions/choices of other groups.
Given a set of groups and their observed decisions, group decision prediction aims to predict the decision of a new group of users, while reverse social choice aims to infer the preferences of the users involved in the observed group decisions.
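A hedged stand-in for both tasks (not DeepGroup's architecture): encode groups as multi-hot membership vectors, fit a classifier on observed group decisions, and read per-user preferences back off the learned weights. The data-generating assumptions below are invented for illustration.

```python
# Group decision prediction + reverse social choice, simplified.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_users, n_groups = 30, 200
pref = rng.normal(0, 1, n_users)       # hidden per-user preference (truth)

members = rng.random((n_groups, n_users)) < 0.2   # multi-hot membership
decisions = (members @ pref > 0).astype(int)      # majority-style decision

model = LogisticRegression(max_iter=1000).fit(members, decisions)
new_group = rng.random(n_users) < 0.2
print("predicted decision:", model.predict(new_group.reshape(1, -1))[0])
# Reverse social choice: weights roughly recover user preferences.
print("corr with true prefs:", np.corrcoef(model.coef_[0], pref)[0, 1])
```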
arXiv Detail & Related papers (2021-03-13T02:05:26Z)
- Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide a more accurate and finer-grained interpretability of a model's decision.
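For a linear softmax classifier the idea reduces to a particularly simple form (a simplification of the paper's latent-space projection, not its method): the contrastive attribution for "why label A rather than B" is each feature's contribution to the logit gap between the two labels.

```python
# Contrastive ("why A rather than B") attributions for a linear model.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
a, b = 0, 1                        # explain: why class a and not class b?
w_diff = clf.coef_[a] - clf.coef_[b]
contrib = w_diff * x               # per-feature share of the logit gap
# (the intercept difference is a feature-independent offset, omitted here)
for name, c in zip(load_iris().feature_names, contrib):
    print(f"{name}: {c:+.2f}")
```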
arXiv Detail & Related papers (2021-03-02T00:36:45Z)
- Overcoming Data Sparsity in Group Recommendation [52.00998276970403]
Group recommender systems should be able to accurately learn not only users' personal preferences but also the preference aggregation strategy.
In this paper, we take the Bipartite Graph Embedding Model (BGEM), the self-attention mechanism and Graph Convolutional Networks (GCNs) as basic building blocks to learn group and user representations in a unified way.
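A minimal numpy sketch of the self-attention ingredient only (the BGEM and GCN parts are omitted, and all shapes are illustrative assumptions): members whose embeddings align with a candidate item receive more weight when the group representation is aggregated.

```python
# Attention-weighted aggregation of member embeddings into a group vector.
import numpy as np

rng = np.random.default_rng(3)
d = 8
user_emb = rng.normal(0, 1, (5, d))   # embeddings of a group's 5 members
item_emb = rng.normal(0, 1, (d,))     # candidate item embedding

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Members aligned with the item get more say, so aggregation adapts per
# item instead of being a fixed mean over members.
weights = softmax(user_emb @ item_emb / np.sqrt(d))
group_emb = weights @ user_emb        # attention-weighted group embedding
score = group_emb @ item_emb          # group-item affinity
print(weights.round(2), float(score))
```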
arXiv Detail & Related papers (2020-10-02T07:11:19Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve the user experience and reveal system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations because there is less training data for them.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
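A generic fairness-constrained re-ranking sketch (the paper's constraint is formulated over knowledge-graph recommendations for user groups; here a simple exposure-parity penalty between two item groups stands in): greedily build the top-k list, trading relevance against the balance of the list so far.

```python
# Greedy re-ranking: relevance minus a group-imbalance penalty.
import numpy as np

rng = np.random.default_rng(4)
n_items, k, lam = 50, 10, 0.5
relevance = rng.random(n_items)
group = rng.integers(0, 2, n_items)   # each candidate item in group 0 or 1

selected, counts = [], np.zeros(2, dtype=int)
for _ in range(k):
    best, best_score = None, -np.inf
    for i in range(n_items):
        if i in selected:
            continue
        c = counts.copy()
        c[group[i]] += 1
        # Penalize how unbalanced the list would become if i were added.
        score = relevance[i] - lam * abs(c[0] - c[1]) / c.sum()
        if score > best_score:
            best, best_score = i, score
    selected.append(best)
    counts[group[best]] += 1
print("top-k items:", selected)
print("exposure per group:", counts)
```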
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Algorithmic Recourse: from Counterfactual Explanations to Interventions [16.9979815165902]
We argue that counterfactual explanations inform an individual where they need to get to, but not how to get there.
Instead, we propose a shift of paradigm from recourse via nearest counterfactual explanations to recourse through minimal interventions.
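The distinction is easy to see on a toy structural causal model (the variables, equations, and costs below are illustrative, not from the paper): the nearest counterfactual edits a downstream feature directly, while a minimal intervention acts on its cause and lets the change propagate.

```python
# Toy SCM: x2 := 2*x1 + u2, and the classifier accepts iff x1 + x2 >= 4.

x1, u2 = 1.0, 0.0
x2 = 2 * x1 + u2                    # structural equation
accept = lambda a, b: a + b >= 4.0
print(accept(x1, x2))               # False: individual is rejected

# (1) Nearest counterfactual: tweak features independently. Cheapest single
# change is raising x2 to 3.0 -- but x2 is caused by x1, so acting on x2
# alone may not even be feasible in the real world.
print(accept(x1, 3.0))              # True, cost |3.0 - 2.0| = 1.0

# (2) Minimal intervention: act on x1; the change propagates via the SCM.
x1_new = 4.0 / 3.0                  # solve x1 + (2*x1 + u2) = 4
x2_new = 2 * x1_new + u2            # downstream effect of the intervention
print(accept(x1_new, x2_new))       # True, cost |4/3 - 1| = 1/3
```

On this toy model, acting on the cause is three times cheaper than the feature-space counterfactual, which is the shift of paradigm the paper argues for.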
arXiv Detail & Related papers (2020-02-14T22:49:42Z)