Fair Machine Learning Under Partial Compliance
- URL: http://arxiv.org/abs/2011.03654v4
- Date: Tue, 27 Sep 2022 02:59:17 GMT
- Title: Fair Machine Learning Under Partial Compliance
- Authors: Jessica Dai, Sina Fazelpour, Zachary C. Lipton
- Abstract summary: We propose a simple model of an employment market, leveraging simulation as a tool to explore the impact of both interaction effects and incentive effects on outcomes and auditing metrics.
Our key findings are that at equilibrium, partial compliance (k% of employers) can result in far less than proportional (k%) progress towards the full compliance outcomes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Typically, fair machine learning research focuses on a single decisionmaker
and assumes that the underlying population is stationary. However, many of the
critical domains motivating this work are characterized by competitive
marketplaces with many decisionmakers. Realistically, we might expect only a
subset of them to adopt any non-compulsory fairness-conscious policy, a
situation that political philosophers call partial compliance. This possibility
raises important questions: how does the strategic behavior of decision
subjects in partial compliance settings affect the allocation outcomes? If k%
of employers were to voluntarily adopt a fairness-promoting intervention,
should we expect k% progress (in aggregate) towards the benefits of universal
adoption, or will the dynamics of partial compliance wash out the hoped-for
benefits? How might adopting a global (versus local) perspective impact the
conclusions of an auditor? In this paper, we propose a simple model of an
employment market, leveraging simulation as a tool to explore the impact of
both interaction effects and incentive effects on outcomes and auditing
metrics. Our key findings are that at equilibrium: (1) partial compliance (k%
of employers) can result in far less than proportional (k%) progress towards
the full compliance outcomes; (2) the gap is more severe when fair employers
match global (vs local) statistics; (3) choices of local vs global statistics
can paint dramatically different pictures of the performance vis-a-vis fairness
desiderata of compliant versus non-compliant employers; and (4) partial
compliance to local parity measures can induce extreme segregation.
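The market model described in the abstract can be illustrated with a minimal simulation sketch. Everything below is a hypothetical toy (the group names, the -0.5 observation bias, the hiring rule, and the `simulate_market` function are all illustrative assumptions), not the authors' actual model: a fraction of employers enforce local demographic parity in their own hires, the rest hire greedily on a biased observed score, and we measure the aggregate hiring gap at different compliance levels.

```python
import random

def simulate_market(num_employers=20, compliance_frac=0.5,
                    pool_size=1000, hires_each=20, seed=0):
    """Toy partial-compliance employment market (illustrative only).

    Employers hire in turn from a shared candidate pool. Group B's
    observed scores carry a hypothetical measurement bias of -0.5.
    Compliant employers enforce local parity (equal hires from each
    group); the rest hire greedily by observed score. Returns the
    aggregate hiring gap: A's share of hires minus B's share.
    """
    rng = random.Random(seed)
    pool = []
    for _ in range(pool_size):
        group = "A" if rng.random() < 0.5 else "B"
        score = rng.gauss(0.0, 1.0)
        observed = score - 0.5 if group == "B" else score  # biased signal
        pool.append((group, observed))
    pool.sort(key=lambda c: c[1], reverse=True)  # best candidates first

    n_compliant = round(compliance_frac * num_employers)
    hired = {"A": 0, "B": 0}
    for e in range(num_employers):
        if e < n_compliant:
            # Fair employer: top candidates, but equal counts per group.
            picks = []
            for grp in ("A", "B"):
                grp_pool = [c for c in pool if c[0] == grp]
                picks.extend(grp_pool[:hires_each // 2])
        else:
            # Non-compliant employer: greedy on the biased observed score.
            picks = pool[:hires_each]
        for c in picks:
            hired[c[0]] += 1
            pool.remove(c)  # hired candidates leave the shared pool
    total = hired["A"] + hired["B"]
    return (hired["A"] - hired["B"]) / total

if __name__ == "__main__":
    for k in (0.0, 0.5, 1.0):
        print(f"compliance {k:.0%}: hiring gap {simulate_market(compliance_frac=k):+.3f}")
```

In this toy, full compliance drives the aggregate gap to zero by construction, while sweeping `compliance_frac` shows how intermediate compliance levels interact with the shared pool; the paper's richer model adds the strategic and incentive effects that make partial-compliance progress less than proportional.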
Related papers
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy from a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z) - The Unfairness of $\varepsilon$-Fairness [0.0]
We show that if the concept of $\varepsilon$-fairness is employed, it can lead to outcomes that are maximally unfair in the real-world context.
We illustrate our findings with two real-world examples: college admissions and credit risk assessment.
arXiv Detail & Related papers (2024-05-15T14:13:35Z) - Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z) - Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets allows us to exhibit significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z) - Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results [7.705334602362225]
We study systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing.
These systems often support communities disproportionately affected by systemic racial, gender, or other injustices.
We propose a framework for evaluating fairness in contextual resource allocation systems inspired by fairness metrics in machine learning.
arXiv Detail & Related papers (2022-12-04T02:30:58Z) - Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [90.35044668396591]
A recurring theme in machine learning is algorithmic monoculture: the same systems, or systems that share components, are deployed by multiple decision-makers.
We propose the component-sharing hypothesis: if decision-makers share components like training data or specific models, then they will produce more homogeneous outcomes.
We test this hypothesis on algorithmic fairness benchmarks, demonstrating that sharing training data reliably exacerbates homogenization.
We conclude with philosophical analyses of and societal challenges for outcome homogenization, with an eye towards implications for deployed machine learning systems.
arXiv Detail & Related papers (2022-11-25T09:33:11Z) - Measuring and signing fairness as performance under multiple stakeholder distributions [39.54243229669015]
The best tools for measuring the fairness of learning systems are rigid fairness metrics encapsulated as mathematical one-liners.
We propose to shift focus from shaping fairness metrics to curating the distributions of examples under which these are computed.
We provide full implementation guidelines for stress testing and illustrate both the benefits and shortcomings of this framework.
arXiv Detail & Related papers (2022-07-20T15:10:02Z) - Understanding Instance-Level Impact of Fairness Constraints [12.866655972682254]
We study the influence of training examples when fairness constraints are imposed.
We find that training on a subset of weighty data examples leads to lower fairness violations at the cost of some accuracy.
arXiv Detail & Related papers (2022-06-30T17:31:33Z) - Equality before the Law: Legal Judgment Consistency Analysis for Fairness [55.91612739713396]
In this paper, we propose an evaluation metric for judgment inconsistency, the Legal Inconsistency Coefficient (LInCo).
We simulate judges from different groups with legal judgment prediction (LJP) models and measure the judicial inconsistency with the disagreement of the judgment results given by LJP models trained on different groups.
We employ LInCo to explore the inconsistency in real cases and come to the following observations: (1) Both regional and gender inconsistency exist in the legal system, but gender inconsistency is much less than regional inconsistency.
arXiv Detail & Related papers (2021-03-25T14:28:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.