Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models
- URL: http://arxiv.org/abs/2206.09875v1
- Date: Mon, 20 Jun 2022 16:27:06 GMT
- Title: Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models
- Authors: Emily Black, Hadi Elzayn, Alexandra Chouldechova, Jacob Goldin, Daniel E. Ho
- Abstract summary: This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
- Score: 73.24381010980606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study examines issues of algorithmic fairness in the context of systems
that inform tax audit selection by the United States Internal Revenue Service
(IRS). While the field of algorithmic fairness has developed primarily around
notions of treating like individuals alike, we instead explore the concept of
vertical equity -- appropriately accounting for relevant differences across
individuals -- which is a central component of fairness in many public policy
settings. Applied to the design of the U.S. individual income tax system,
vertical equity relates to the fair allocation of tax and enforcement burdens
across taxpayers of different income levels. Through a unique collaboration
with the Treasury Department and IRS, we use access to anonymized individual
taxpayer microdata, risk-selected audits, and random audits from 2010-14 to
study vertical equity in tax administration. In particular, we assess how the
use of modern machine learning methods for selecting audits may affect vertical
equity. First, we show how the use of more flexible machine learning
(classification) methods -- as opposed to simpler models -- shifts audit
burdens from high- to middle-income taxpayers. Second, we show that while
existing algorithmic fairness techniques can mitigate some disparities across
income, they can incur a steep cost to performance. Third, we show that the
choice of whether to treat risk of underreporting as a classification or
regression problem is highly consequential. Moving from classification to
regression models to predict underreporting shifts audit burden substantially
toward high-income individuals, while increasing revenue. Last, we explore the
role of differential audit cost in shaping the audit distribution. We show that
a narrow focus on return-on-investment can undermine vertical equity. Our
results have implications for the design of algorithmic tools across the public
sector.
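To make the policy comparisons concrete, below is a minimal, hypothetical Python sketch of the four audit-selection rules the abstract contrasts: ranking by classification probability, ranking by regression-predicted underreported dollars, ranking by return-on-investment, and a simple equal-audit-rate constraint across income deciles. The synthetic data, the model choices, and the assumption that high-income audits cost more are all illustrative stand-ins, not the paper's actual IRS microdata or models.

```python
# Hypothetical sketch of audit-selection policies on synthetic data.
# Everything here is an illustrative assumption, not the paper's pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 20_000

# Synthetic returns: log-normal income plus unrelated noise features.
income = rng.lognormal(mean=10.5, sigma=1.0, size=n)
X = np.column_stack([np.log(income), rng.normal(size=(n, 3))])
deciles = np.digitize(income, np.quantile(income, np.linspace(0.1, 0.9, 9)))

# Assumed ground truth: underreported dollars that scale with income.
underreported = np.maximum(0.0, rng.normal(0.02 * income, 0.06 * income))
has_misreport = (underreported > 0).astype(int)

# Policy 1 (classification): predict "any misreporting", audit top probabilities.
p_misreport = GradientBoostingClassifier().fit(X, has_misreport).predict_proba(X)[:, 1]

# Policy 2 (regression): predict underreported dollars, audit largest predictions.
dollars_pred = GradientBoostingRegressor().fit(X, underreported).predict(X)

# Policy 3 (ROI): assume, hypothetically, that high-income audits cost more.
audit_cost = 1_000.0 + 0.01 * income
roi = dollars_pred / audit_cost

budget = int(0.01 * n)  # audit 1% of returns

# Policy 4 (a simple fairness-style constraint): equal audit rates per decile,
# auditing the riskiest returns within each decile.
per_decile = budget // 10
constrained = np.concatenate([
    np.flatnonzero(deciles == d)[np.argsort(dollars_pred[deciles == d])[-per_decile:]]
    for d in range(10)
])

def report(name, audited):
    """Print where the audit burden lands and the net revenue it yields."""
    top_share = np.mean(deciles[audited] == 9)  # share in top income decile
    net = underreported[audited].sum() - audit_cost[audited].sum()
    print(f"{name:>14}: top-decile share={top_share:.2f}, net revenue={net:,.0f}")

for name, score in [("classification", p_misreport),
                    ("regression", dollars_pred),
                    ("ROI", roi)]:
    report(name, np.argsort(score)[-budget:])
report("equal-rate", constrained)
```

Whether this toy data-generating process reproduces the paper's empirical directions depends entirely on the assumptions above; the sketch only shows where each design choice (target variable, ranking score, budget constraint) enters an audit-selection pipeline.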
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- A Taxation Perspective for Fair Re-ranking [61.946428892727795]
We introduce a new fair re-ranking method named Tax-rank, which levies taxes based on the difference in utility between two items.
Our model Tax-rank offers a superior tax policy for fair re-ranking, theoretically demonstrating both continuity and controllability over accuracy loss.
arXiv Detail & Related papers (2024-04-27T08:21:29Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Error Parity Fairness: Testing for Group Fairness in Regression Tasks [5.076419064097733]
This work presents error parity as a regression fairness notion and introduces a testing methodology to assess group fairness.
A suitable permutation test then compares groups on several statistics to explore disparities and identify the impacted groups.
Overall, the proposed regression fairness testing methodology fills a gap in the fair machine learning literature and may serve as a part of larger accountability assessments and algorithm audits.
arXiv Detail & Related papers (2022-08-16T17:47:20Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Integrating Reward Maximization and Population Estimation: Sequential Decision-Making for Internal Revenue Service Audit Selection [2.2182596728059116]
We introduce a new setting, optimize-and-estimate structured bandits.
This setting is inherent to many public and private sector applications.
We demonstrate its importance on real data from the United States Internal Revenue Service.
arXiv Detail & Related papers (2022-04-25T18:28:55Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Non-Comparative Fairness for Human-Auditing and Its Relation to Traditional Fairness Notions [1.8275108630751837]
This paper proposes a new fairness notion based on the principle of non-comparative justice.
We show that any machine learning system (MLS) can be deemed fair from the perspective of comparative fairness if it is non-comparatively fair with respect to a fair auditor.
We also show that the converse holds true in the context of individual fairness.
arXiv Detail & Related papers (2021-06-29T20:05:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.