Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing
- URL: http://arxiv.org/abs/2408.02558v4
- Date: Thu, 5 Sep 2024 23:38:21 GMT
- Authors: Shiqi Fang, Zexun Chen, Jake Ansell
- Abstract summary: The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework, which combines the strengths of counterfactual fairness and peer comparison strategy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the European Union's Artificial Intelligence Act taking effect on 1 August 2024, high-risk AI applications must adhere to stringent transparency and fairness standards. This paper addresses a crucial question: how can we scientifically audit algorithmic fairness? Current methods typically remain at the basic detection stage of auditing, without accounting for more complex scenarios. We propose a novel framework, "peer-induced fairness", which combines the strengths of counterfactual fairness and a peer-comparison strategy, creating a reliable and robust tool for auditing algorithmic fairness. Our framework is universal, adaptable to various domains, and capable of handling different levels of data quality, including skewed distributions. Moreover, it can distinguish whether adverse decisions result from algorithmic discrimination or from inherent limitations of the subjects, thereby enhancing transparency. The framework can serve both as a self-assessment tool for AI developers and as an external assessment tool for auditors ensuring compliance with the EU AI Act. We demonstrate its utility on small and medium-sized enterprises' access to finance, uncovering significant unfairness: 41.51% of micro-firms face discrimination compared with non-micro firms. These findings highlight the framework's potential for broader applications in ensuring equitable AI-driven decision-making.
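The core peer-comparison intuition can be illustrated with a toy sketch. Everything below is hypothetical (the function name `peer_induced_audit`, the k-nearest-peer matching, and the 0.5 approval-rate threshold are illustrative choices, not the paper's actual causal method): it only shows the idea of flagging individuals who were rejected while statistically similar peers outside their group were mostly approved.

```python
import numpy as np

def peer_induced_audit(X, group, decision, k=5, gap_threshold=0.5):
    """Minimal sketch of a peer-comparison fairness audit.

    For each individual in the audited group (group == 1), find its k
    nearest peers (Euclidean distance on non-protected features X) in
    the reference group (group == 0), and flag a potential disparity
    when the individual was rejected although most peers were approved.
    """
    X = np.asarray(X, dtype=float)
    audited = np.where(group == 1)[0]   # e.g. micro-firms
    ref = np.where(group == 0)[0]       # peer pool, e.g. non-micro firms
    flagged = []
    for i in audited:
        d = np.linalg.norm(X[ref] - X[i], axis=1)
        peers = ref[np.argsort(d)[:k]]
        peer_rate = decision[peers].mean()  # peers' approval rate
        # rejected despite similar peers being mostly approved
        if decision[i] == 0 and peer_rate > gap_threshold:
            flagged.append(i)
    return flagged

# toy data: approvals driven by one feature, but the audited
# group is always rejected regardless of that feature
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = np.repeat([1, 0], 100)
decision = (X[:, 0] > 0).astype(int)
decision[:100] = 0
share = len(peer_induced_audit(X, group, decision)) / 100
print(f"flagged share of audited group: {share:.2f}")
```

In this toy setup, roughly the half of audited individuals whose peers would have been approved get flagged; the other half are rejections a comparable peer would also have received, which mirrors the paper's distinction between algorithmic discrimination and inherent limitations of the subjects.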
Related papers
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications
This paper introduces a framework to ensure that AI is ethical, controllable, viable, and desirable.
Different case studies validate this framework by integrating AI in both academic and practical environments.
arXiv Detail & Related papers (2024-09-25T12:39:28Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Causal Fairness for Outcome Control
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Beyond Incompatibility: Trade-offs between Mutually Exclusive Fairness Criteria in Machine Learning and Law
We present a novel algorithm (FAir Interpolation Method: FAIM) for continuously interpolating between three fairness criteria.
We demonstrate the effectiveness of our algorithm when applied to synthetic data, the COMPAS data set, and a new, real-world data set from the e-commerce sector.
arXiv Detail & Related papers (2022-12-01T12:47:54Z)
- Causal Fairness Analysis
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- Fairness Score and Process Standardization: Framework for Fairness Certification in Artificial Intelligence Systems
We propose a novel Fairness Score to measure the fairness of a data-driven AI system.
It will also provide a framework to operationalise the concept of fairness and facilitate the commercial deployment of such systems.
arXiv Detail & Related papers (2022-01-10T15:45:12Z)
- Understanding Relations Between Perception of Fairness and Trust in Algorithmic Decision Making
We aim to understand the relationship between induced algorithmic fairness and its perception in humans.
We also study how induced algorithmic fairness affects user trust in algorithmic decision-making.
arXiv Detail & Related papers (2021-09-29T11:00:39Z)
- Towards a Flexible Framework for Algorithmic Fairness
In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward.
We present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions.
arXiv Detail & Related papers (2020-10-15T16:06:53Z)
- Beyond Individual and Group Fairness
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.