SCALES: From Fairness Principles to Constrained Decision-Making
- URL: http://arxiv.org/abs/2209.10860v1
- Date: Thu, 22 Sep 2022 08:44:36 GMT
- Title: SCALES: From Fairness Principles to Constrained Decision-Making
- Authors: Sreejith Balakrishnan, Jianxin Bi, Harold Soh
- Abstract summary: We show that well-known fairness principles can be encoded either as a utility component, a non-causal component, or a causal component.
We show that our framework produces fair policies that embody alternative fairness principles in single-step and sequential decision-making scenarios.
- Score: 16.906822244101445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes SCALES, a general framework that translates
well-established fairness principles into a common representation based on the
Constrained Markov Decision Process (CMDP). With the help of causal language,
our framework can place constraints on both the procedure of decision making
(procedural fairness) as well as the outcomes resulting from decisions (outcome
fairness). Specifically, we show that well-known fairness principles can be
encoded either as a utility component, a non-causal component, or a causal
component in a SCALES-CMDP. We illustrate SCALES using a set of case studies
involving a simulated healthcare scenario and the real-world COMPAS dataset.
Experiments demonstrate that our framework produces fair policies that embody
alternative fairness principles in single-step and sequential decision-making
scenarios.
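To make the core idea concrete, here is a minimal illustrative sketch of a fairness principle encoded as a constraint in a single-step decision problem, in the spirit of a CMDP. This is not the paper's implementation: the population numbers, the utility function, the demographic-parity constraint, and the brute-force solver are all invented for illustration.

```python
# Illustrative sketch (not the SCALES implementation): a single-step
# decision problem where a fairness principle enters as a constraint,
# as in a Constrained MDP. All numbers below are made up.

import itertools

# Population distribution P(group, qualified).
# Group A is larger and more often qualified than group B.
pop = {("A", True): 0.35, ("A", False): 0.25,
       ("B", True): 0.15, ("B", False): 0.25}

def utility(policy):
    """Expected utility: +1 for accepting a qualified applicant,
    -1 for accepting an unqualified one."""
    return sum(p * policy[(g, q)] * (1 if q else -1)
               for (g, q), p in pop.items())

def acceptance_rate(policy, group):
    """P(accept | group) -- the quantity a demographic-parity
    constraint forces to be (nearly) equal across groups."""
    mass = sum(p for (g, q), p in pop.items() if g == group)
    return sum(p * policy[(g, q)]
               for (g, q), p in pop.items() if g == group) / mass

def solve(eps, grid=11):
    """Brute-force search over per-state acceptance probabilities,
    keeping only policies that satisfy the parity constraint."""
    levels = [i / (grid - 1) for i in range(grid)]
    best, best_u = None, float("-inf")
    for probs in itertools.product(levels, repeat=len(pop)):
        policy = dict(zip(pop.keys(), probs))
        gap = abs(acceptance_rate(policy, "A")
                  - acceptance_rate(policy, "B"))
        if gap <= eps and utility(policy) > best_u:
            best, best_u = policy, utility(policy)
    return best, best_u

unconstrained, u0 = solve(eps=1.0)   # constraint inactive
fair, u1 = solve(eps=0.05)           # near demographic parity
print(f"utility without parity constraint: {u0:.3f}")
print(f"utility with parity constraint:    {u1:.3f}")
```

The constrained policy trades some utility for a small acceptance-rate gap between groups, which is the trade-off a CMDP formulation makes explicit. Real sequential settings replace the brute-force search with a proper constrained-policy-optimization solver.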
Related papers
- Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates for high-risk individuals, even when the assumption that all relevant risk factors are observed is relaxed or removed entirely.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing biases as part of developing a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Domain Generalization via Rationale Invariance [70.32415695574555]
This paper offers a new perspective to ease the challenge of domain generalization, which involves maintaining robust results even in unseen environments.
We propose treating the element-wise contributions to the final results as the rationale for making a decision and representing the rationale for each sample as a matrix.
Our experiments demonstrate that the proposed approach achieves competitive results across various datasets, despite its simplicity.
arXiv Detail & Related papers (2023-08-22T03:31:40Z)
- ACROCPoLis: A Descriptive Framework for Making Sense of Fairness [6.4686347616068005]
We propose the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects.
The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit.
arXiv Detail & Related papers (2023-04-19T21:14:57Z)
- Fair Off-Policy Learning from Observational Data [30.77874108094485]
We propose a novel framework for fair off-policy learning.
We first formalize different fairness notions for off-policy learning.
We then propose a neural network-based framework to learn optimal policies under different fairness notions.
arXiv Detail & Related papers (2023-03-15T10:47:48Z)
- Group Fairness in Prediction-Based Decision Making: From Moral Assessment to Implementation [0.0]
We introduce a framework for the moral assessment of what fairness means in a given context.
We map the assessment's results to established statistical group fairness criteria.
We extend the FEC principle to cover all types of group fairness criteria.
arXiv Detail & Related papers (2022-10-19T10:44:21Z)
- Relational Proxies: Emergent Relationships as Fine-Grained Discriminators [52.17542855760418]
We propose a novel approach that leverages the relationship between the global and local parts of an object to encode its label.
We design Relational Proxies based on our theoretical findings and evaluate them on seven challenging fine-grained benchmark datasets.
We also experimentally validate our theory and obtain consistent results across multiple benchmarks.
arXiv Detail & Related papers (2022-10-05T11:08:04Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Justice-Based Framework for the Analysis of Algorithmic Fairness-Utility Trade-Offs [0.0]
In prediction-based decision-making systems, different perspectives can be at odds.
The short-term business goals of the decision makers are often in conflict with the decision subjects' wish to be treated fairly.
We propose a framework to make these value-laden choices clearly visible.
arXiv Detail & Related papers (2022-06-06T20:31:55Z)
- Attributing Fair Decisions with Attention Interventions [28.968122909973975]
We design an attention-based model that can be leveraged as an attribution framework.
It can identify features responsible for both performance and fairness of the model through attention interventions and attention weight manipulation.
We then design a post-processing bias mitigation strategy and compare it with a suite of baselines.
arXiv Detail & Related papers (2021-09-08T22:28:44Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
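Several of the entries above refer to "statistical group fairness criteria". As a minimal illustration of what those criteria measure (toy data, not drawn from any of the listed papers), here is how two common ones are computed from binary predictions:

```python
# Toy illustration of two common statistical group fairness criteria.
# The records below are invented; real audits use the model's actual
# predictions on held-out data.

def rate(rows, pred=None, label=None, group=None):
    """Fraction of rows matching `pred`, restricted to the slice
    selected by `label` and `group` (None means no restriction)."""
    sel = [r for r in rows
           if (group is None or r["group"] == group)
           and (label is None or r["label"] == label)]
    hits = [r for r in sel if pred is None or r["pred"] == pred]
    return len(hits) / len(sel)

# Toy records: (group, true label, model prediction).
rows = [{"group": g, "label": y, "pred": p}
        for g, y, p in [
            ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0),
            ("B", 1, 1), ("B", 1, 0), ("B", 0, 0), ("B", 0, 0),
        ]]

# Demographic parity: P(pred=1 | group) should be equal across groups.
dp_gap = abs(rate(rows, pred=1, group="A") - rate(rows, pred=1, group="B"))

# Equal opportunity: P(pred=1 | label=1, group) -- the true positive
# rate -- should be equal across groups.
eo_gap = abs(rate(rows, pred=1, label=1, group="A")
             - rate(rows, pred=1, label=1, group="B"))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

The two gaps can disagree on the same predictions, which is one reason frameworks like those above must make the choice of criterion, and its moral justification, explicit.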
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.