Fair Machine Learning Under Partial Compliance
- URL: http://arxiv.org/abs/2011.03654v4
- Date: Tue, 27 Sep 2022 02:59:17 GMT
- Title: Fair Machine Learning Under Partial Compliance
- Authors: Jessica Dai, Sina Fazelpour, Zachary C. Lipton
- Abstract summary: We propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics.
Our key findings are that at equilibrium, partial compliance (k% of employers) can result in far less than proportional (k%) progress towards the full compliance outcomes.
- Score: 22.119168255562897
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typically, fair machine learning research focuses on a single decisionmaker
and assumes that the underlying population is stationary. However, many of the
critical domains motivating this work are characterized by competitive
marketplaces with many decisionmakers. Realistically, we might expect only a
subset of them to adopt any non-compulsory fairness-conscious policy, a
situation that political philosophers call partial compliance. This possibility
raises important questions: how does the strategic behavior of decision
subjects in partial compliance settings affect the allocation outcomes? If k%
of employers were to voluntarily adopt a fairness-promoting intervention,
should we expect k% progress (in aggregate) towards the benefits of universal
adoption, or will the dynamics of partial compliance wash out the hoped-for
benefits? How might adopting a global (versus local) perspective impact the
conclusions of an auditor? In this paper, we propose a simple model of an
employment market, leveraging simulation as a tool to explore the impact of
both interaction effects and incentive effects on outcomes and auditing
metrics. Our key findings are that at equilibrium: (1) partial compliance (k%
of employers) can result in far less than proportional (k%) progress towards
the full compliance outcomes; (2) the gap is more severe when fair employers
match global (vs local) statistics; (3) choices of local vs global statistics
can paint dramatically different pictures of the performance vis-a-vis fairness
desiderata of compliant versus non-compliant employers; and (4) partial
compliance to local parity measures can induce extreme segregation.
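The abstract specifies the employment-market model only at a high level. As a rough illustration of the partial-compliance dynamic it describes (an assumption-laden sketch, not the authors' actual simulation), one might model employers hiring sequentially from a shared candidate pool, where a fraction of employers hire under a parity constraint and the rest hire purely by a biased observed score:

```python
import random

def hiring_gap(num_employers=10, compliant_frac=0.5, pool_size=200,
               hires_each=10, seed=0):
    """Toy partial-compliance market (illustrative only, not the paper's
    model).  A compliant_frac share of employers hire under a parity
    constraint; the rest hire greedily by an observed score that is
    biased against group B.  Returns the aggregate hiring-rate gap."""
    rng = random.Random(seed)
    # Equal-sized groups; group B's observed scores are shifted downward
    # (a stand-in for measurement bias).
    pool = [("A", rng.gauss(0.0, 1.0)) for _ in range(pool_size // 2)]
    pool += [("B", rng.gauss(-0.5, 1.0)) for _ in range(pool_size // 2)]
    n_compliant = round(num_employers * compliant_frac)
    hired = {"A": 0, "B": 0}
    for e in range(num_employers):
        pool.sort(key=lambda c: c[1], reverse=True)  # best candidates first
        if e < n_compliant:
            # Parity-constrained employer: take the best available A and B
            # alternately, so its own hires are balanced across groups.
            picks = []
            for g in ("A", "B") * (hires_each // 2):
                cand = next((c for c in pool if c[0] == g), None)
                if cand is not None:
                    pool.remove(cand)
                    picks.append(cand)
        else:
            # Non-compliant employer: take the top scorers outright.
            picks, pool = pool[:hires_each], pool[hires_each:]
        for g, _ in picks:
            hired[g] += 1
    return abs(hired["A"] - hired["B"]) / (hired["A"] + hired["B"])
```

Sweeping `compliant_frac` from 0 to 1 in such a sketch lets one ask the paper's headline question directly: whether the gap shrinks in proportion to the share of compliant employers, or much more slowly.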
Related papers
- Achieving Fairness in Predictive Process Analytics via Adversarial Learning [50.31323204077591]
This paper addresses the challenge of integrating a debiasing phase into predictive business process analytics.
Our framework, which leverages adversarial debiasing, is evaluated on four case studies, showing a significant reduction in the contribution of biased variables to the predicted value.
arXiv Detail & Related papers (2024-10-03T15:56:03Z) - It is Giving Major Satisfaction: Why Fairness Matters for Developers [9.312605205492456]
This study aims to examine how fairness perceptions relate to job satisfaction among software practitioners.
Our findings indicate that all four fairness dimensions, distributive, procedural, interpersonal, and informational, significantly affect job satisfaction.
The relationship between fairness perceptions and job satisfaction is notably stronger for female, ethnically underrepresented, less experienced practitioners, and those with work limitations.
arXiv Detail & Related papers (2024-10-03T13:40:00Z) - Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - The Unfairness of $\varepsilon$-Fairness [0.0]
We show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in the real-world context.
We illustrate our findings with two real-world examples: college admissions and credit risk assessment.
arXiv Detail & Related papers (2024-05-15T14:13:35Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets allows us to exhibit significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z) - Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results [7.705334602362225]
We study systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing.
These systems often support communities disproportionately affected by systemic racial, gender, or other injustices.
We propose a framework for evaluating fairness in contextual resource allocation systems inspired by fairness metrics in machine learning.
arXiv Detail & Related papers (2022-12-04T02:30:58Z) - Measuring and signing fairness as performance under multiple stakeholder distributions [39.54243229669015]
The best tools for measuring the fairness of learning systems are rigid fairness metrics encapsulated as mathematical one-liners.
We propose to shift focus from shaping fairness metrics to curating the distributions of examples under which these are computed.
We provide full implementation guidelines for stress testing and illustrate both the benefits and shortcomings of this framework.
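The proposal above keeps the metric fixed and varies the distribution it is computed under. A minimal sketch of that idea (the stakeholder names and weighting scheme here are hypothetical, not from the paper) evaluates one metric, accuracy, under several stakeholder-curated example weightings:

```python
def accuracy_under_distributions(y_true, y_pred, weights_by_stakeholder):
    """Evaluate a single fixed metric (accuracy) under several
    stakeholder-specified example weightings, rather than reporting one
    aggregate number.  Illustrative sketch, not the paper's code."""
    scores = {}
    for name, weights in weights_by_stakeholder.items():
        # Weighted accuracy: mass on correctly predicted examples,
        # normalized by the total mass of this stakeholder's weighting.
        hit = sum(w for yt, yp, w in zip(y_true, y_pred, weights)
                  if yt == yp)
        scores[name] = hit / sum(weights)
    return scores
```

A model that looks strong under a uniform weighting can look much weaker under a stakeholder's weighting that emphasizes the cases they care about, which is precisely the disagreement this framing is meant to surface.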
arXiv Detail & Related papers (2022-07-20T15:10:02Z) - Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
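One plausible reading of the disagreement measure described above (a sketch only; the paper's exact LInCo formula is not given here) is the average pairwise disagreement rate between group-specific models' judgments over a shared set of cases:

```python
from itertools import combinations

def inconsistency_coefficient(judgments_by_group):
    """Average pairwise disagreement between judgment-prediction models
    trained on different groups, evaluated on the same cases.
    Illustrative sketch of the disagreement idea, not the exact LInCo."""
    models = list(judgments_by_group.values())
    n_cases = len(models[0])
    pairs = list(combinations(models, 2))
    # Count, over all model pairs, the cases on which the pair disagrees.
    disagreements = sum(
        sum(a != b for a, b in zip(p, q)) for p, q in pairs)
    return disagreements / (len(pairs) * n_cases)
```

Under this reading, identical judgments across groups give a coefficient of 0, and the coefficient grows with the fraction of cases on which group-trained models diverge.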
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.