A Seven-Layer Model for Standardising AI Fairness Assessment
- URL: http://arxiv.org/abs/2212.11207v1
- Date: Wed, 21 Dec 2022 17:28:07 GMT
- Title: A Seven-Layer Model for Standardising AI Fairness Assessment
- Authors: Avinash Agarwal, Harsh Agarwal
- Abstract summary: We show that AI systems are prone to biases at every stage of their lifecycle, from inception to usage.
We propose a novel seven-layer model, inspired by the Open System Interconnection (OSI) model, to standardise AI fairness handling.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Problem statement: Standardisation of AI fairness rules and benchmarks is
challenging because AI fairness and other ethical requirements depend on
multiple factors such as context, use case, type of the AI system, and so on.
In this paper, we show that an AI system is prone to biases at every
stage of its lifecycle, from inception to usage, and that every stage
requires due attention to mitigate bias. A standardised approach to
handling AI fairness at every stage is therefore needed. Gap analysis: While AI
fairness is an active research topic, a holistic strategy for achieving it is
generally missing. Most researchers focus on only a few facets of AI
model-building; a review of the literature shows an excessive focus on dataset
bias, fairness metrics, and algorithmic bias, while other aspects affecting AI
fairness are ignored. The
solution proposed: We propose a comprehensive approach in the form of a novel
seven-layer model, inspired by the Open System Interconnection (OSI) model, to
standardise AI fairness handling. Despite these differences, most AI systems
share similar model-building stages. The proposed model
splits the AI system lifecycle into seven abstraction layers, each
corresponding to a well-defined AI model-building or usage stage. We also
provide checklists for each layer and deliberate on potential sources of bias
in each layer and their mitigation methodologies. This work will facilitate
layer-wise standardisation of AI fairness rules and benchmarking parameters.
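The layer-wise structure described in the abstract can be pictured as a simple data model: each lifecycle layer carries its own bias checklist, and the system is audited layer by layer. The sketch below is illustrative only; the abstract does not enumerate the seven layers here, so the `layer-1` … `layer-7` names and the checklist items are hypothetical placeholders, not the paper's actual layers.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Minimal sketch of layer-wise fairness checklists. The paper's actual
# seven layer names are not given in the abstract, so "layer-1" through
# "layer-7" below are hypothetical placeholders.

@dataclass
class FairnessLayer:
    name: str
    # checklist item -> whether the bias-mitigation check has passed
    checklist: Dict[str, bool] = field(default_factory=dict)

    def open_items(self) -> List[str]:
        """Checklist items in this layer that still need attention."""
        return [item for item, done in self.checklist.items() if not done]

# One layer per abstraction level of the AI lifecycle (seven in total).
lifecycle = [FairnessLayer(f"layer-{i}") for i in range(1, 8)]
lifecycle[0].checklist = {
    "problem framing reviewed for bias": True,
    "affected stakeholders consulted": False,
}

def audit(layers: List[FairnessLayer]) -> Dict[str, List[str]]:
    """Layer-wise report of unresolved fairness checklist items."""
    return {layer.name: layer.open_items() for layer in layers}

print(audit(lifecycle)["layer-1"])  # -> ['affected stakeholders consulted']
```

Standard-setting bodies could then benchmark each layer independently, mirroring how the OSI model lets each network layer be specified and tested in isolation.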
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Why AI Is WEIRD and Should Not Be This Way: Towards AI For Everyone, With Everyone, By Everyone [47.19142377073831]
This paper presents a vision for creating AI systems that are inclusive at every stage of development.
We address key limitations in the current AI pipeline and its WEIRD (Western, Educated, Industrialized, Rich, Democratic) representation.
arXiv Detail & Related papers (2024-10-09T10:44:26Z)
- The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems [0.0]
There still exists a gap between principles and practices in AI ethics.
One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope.
arXiv Detail & Related papers (2024-07-07T12:16:01Z)
- Mapping the Potential of Explainable AI for Fairness Along the AI Lifecycle [1.4183971140167246]
We see explainable AI (XAI) as a promising way to increase fairness in AI systems.
We distill eight fairness desiderata, map them along the AI lifecycle, and discuss how XAI could help address each of them.
arXiv Detail & Related papers (2024-04-29T14:34:43Z)
- The Pursuit of Fairness in Artificial Intelligence Models: A Survey [2.124791625488617]
This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems.
A thorough study is conducted of the approaches and techniques employed by researchers to mitigate bias in AI models.
We also delve into the impact of biased models on user experience and the ethical considerations to contemplate when developing and deploying such models.
arXiv Detail & Related papers (2024-03-26T02:33:36Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Responsible AI Pattern Catalogue: A Collection of Best Practices for AI Governance and Engineering [20.644494592443245]
We present a Responsible AI Pattern Catalogue based on the results of a Multivocal Literature Review (MLR).
Rather than staying at the principle or algorithm level, we focus on patterns that AI system stakeholders can undertake in practice to ensure that the developed AI systems are responsible throughout the entire governance and engineering lifecycle.
arXiv Detail & Related papers (2022-09-12T00:09:08Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.