Beyond Demographic Parity: Redefining Equal Treatment
- URL: http://arxiv.org/abs/2303.08040v3
- Date: Mon, 2 Oct 2023 02:06:48 GMT
- Title: Beyond Demographic Parity: Redefining Equal Treatment
- Authors: Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri,
Steffen Staab
- Abstract summary: We show the theoretical properties of our notion of equal treatment and devise a two-sample test based on the AUC of an equal treatment inspector.
We release explanationspace, an open-source Python package with methods and tutorials.
- Score: 23.28973277699437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Liberalism-oriented political philosophy reasons that all individuals should
be treated equally independently of their protected characteristics. Related
work in machine learning has translated the concept of \emph{equal treatment}
into terms of \emph{equal outcome} and measured it as \emph{demographic parity}
(also called \emph{statistical parity}). Our analysis reveals that the two
concepts of equal outcome and equal treatment diverge; therefore, demographic
parity does not faithfully represent the notion of \emph{equal treatment}. We
propose a new formalization for equal treatment by (i) considering the
influence of feature values on predictions, such as computed by Shapley values
decomposing a prediction across its features, (ii) defining distributions of
explanations, and (iii) comparing explanation distributions between populations
with different protected characteristics. We show the theoretical properties of
our notion of equal treatment and devise a classifier two-sample test based on
the AUC of an equal treatment inspector. We study our formalization of equal
treatment on synthetic and natural data. We release \texttt{explanationspace},
an open-source Python package with methods and tutorials.
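The three steps of the abstract can be sketched in code. The following is a minimal illustration, not the paper's `explanationspace` implementation: it fits a model, computes Shapley-value explanations (for a linear model these reduce to `coef * (x - mean(x))`, avoiding an external dependency), and then trains an "equal treatment inspector" to predict the protected attribute from the explanation distribution. An inspector AUC near 0.5 is consistent with equal treatment; a higher AUC signals that the explanation distributions differ across groups. The synthetic data and variable names here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: x1 is correlated with the protected attribute z, x2 is not.
n = 4000
z = rng.integers(0, 2, n)                  # protected attribute
x1 = rng.normal(z, 1.0)                    # depends on z
x2 = rng.normal(0.0, 1.0, n)               # independent of z
X = np.column_stack([x1, x2])
y = (x1 + x2 + rng.normal(0.0, 1.0, n) > 0).astype(int)

# (i) Fit the model under audit and compute per-feature Shapley values.
# For a linear model with an expectation baseline, the exact Shapley value
# of feature j is coef_j * (x_j - E[x_j]).
model = LogisticRegression().fit(X, y)
phi = model.coef_[0] * (X - X.mean(axis=0))

# (ii)-(iii) Explanation distributions compared via a classifier two-sample
# test: an inspector predicts z from the explanations; its held-out AUC is
# the test statistic.
phi_tr, phi_te, z_tr, z_te = train_test_split(phi, z, random_state=0)
inspector = LogisticRegression().fit(phi_tr, z_tr)
auc = roc_auc_score(z_te, inspector.predict_proba(phi_te)[:, 1])
print(f"equal treatment inspector AUC: {auc:.3f}")
```

Because x1 carries information about z, the explanation distributions differ between groups and the inspector's AUC lands well above 0.5; dropping the z-correlated feature (or decorrelating it) would push the AUC back toward 0.5.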
Related papers
- Causal Fair Machine Learning via Rank-Preserving Interventional Distributions [0.5062312533373299]
We define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world.
We propose rank-preserving interventional distributions to define a specific FiND world in which this holds.
We show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.
arXiv Detail & Related papers (2023-07-24T13:46:50Z)
- Enriching Disentanglement: From Logical Definitions to Quantitative Metrics [59.12308034729482]
Disentangling the explanatory factors in complex data is a promising approach for data-efficient representation learning.
We establish relationships between logical definitions and quantitative metrics to derive theoretically grounded disentanglement metrics.
We empirically demonstrate the effectiveness of the proposed metrics by isolating different aspects of disentangled representations.
arXiv Detail & Related papers (2023-05-19T08:22:23Z)
- Evaluating the Robustness of Interpretability Methods through Explanation Invariance and Equivariance [72.50214227616728]
Interpretability methods are valuable only if their explanations faithfully describe the explained model.
We consider neural networks whose predictions are invariant under a specific symmetry group.
arXiv Detail & Related papers (2023-04-13T17:59:03Z)
- Measuring Fine-Grained Semantic Equivalence with Abstract Meaning Representation [9.666975331506812]
Identifying semantically equivalent sentences is important for many NLP tasks.
Current approaches to semantic equivalence take a loose, sentence-level approach to "equivalence".
We introduce a novel, more sensitive method of characterizing semantic equivalence that leverages Abstract Meaning Representation graph structures.
arXiv Detail & Related papers (2022-10-06T16:08:27Z)
- Nonparametric Conditional Local Independence Testing [69.31200003384122]
Conditional local independence is an independence relation among continuous time processes.
No nonparametric test of conditional local independence has been available.
We propose such a nonparametric test based on double machine learning.
arXiv Detail & Related papers (2022-03-25T10:31:02Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fair Wrapping for Black-box Predictions [105.10203274098862]
We learn a wrapper function, defined as an alpha-tree, that modifies the prediction.
We show that our modification has appealing properties in terms of composition of alpha-trees, generalization, interpretability, and KL divergence between modified and original predictions.
arXiv Detail & Related papers (2022-01-31T01:02:39Z)
- A partition-based similarity for classification distributions [11.877906044513272]
We define a measure of similarity between classification distributions that is both principled from the perspective of statistical pattern recognition and useful from the perspective of machine learning practitioners.
We propose a novel similarity on classification distributions, dubbed task similarity, that quantifies how an optimally-transformed optimal representation for a source distribution performs when applied to inference related to a target distribution.
arXiv Detail & Related papers (2020-11-12T18:21:11Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.