Fairness Score and Process Standardization: Framework for Fairness
Certification in Artificial Intelligence Systems
- URL: http://arxiv.org/abs/2201.06952v1
- Date: Mon, 10 Jan 2022 15:45:12 GMT
- Title: Fairness Score and Process Standardization: Framework for Fairness
Certification in Artificial Intelligence Systems
- Authors: Avinash Agarwal, Harsh Agarwal, Nihaarika Agarwal
- Abstract summary: We propose a novel Fairness Score to measure the fairness of a data-driven AI system.
It will also provide a framework to operationalise the concept of fairness and facilitate the commercial deployment of such systems.
- Score: 0.4297070083645048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decisions made by various Artificial Intelligence (AI) systems greatly
influence our day-to-day lives. With the increasing use of AI systems, it
becomes crucial to know that they are fair, to identify the underlying biases
in their decision-making, and to create a standardized framework for
ascertaining their fairness. In this paper, we propose a novel Fairness Score
to measure the fairness of a data-driven AI system and a Standard Operating
Procedure (SOP) for issuing Fairness Certification for such systems.
Standardizing the Fairness Score and the audit process will ensure quality,
reduce ambiguity, enable comparison, and improve the trustworthiness of AI
systems. It will also provide a framework to operationalise the concept of
fairness and facilitate the commercial deployment of such systems.
Furthermore, a Fairness Certificate issued by a designated third-party
auditing agency following the standardized process would boost organizations'
confidence in the AI systems they intend to deploy. The Bias Index proposed in
this paper also reveals comparative bias amongst the various protected
attributes within the dataset. To substantiate the proposed framework, we
iteratively train a model on biased and unbiased data using multiple datasets
and verify that the Fairness Score and the proposed process correctly identify
the biases and judge the fairness.
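The abstract names the Fairness Score and Bias Index but does not give their formulas here, so the following is only a minimal sketch of how such quantities might be computed. It assumes the Bias Index of a protected attribute is the deviation of its disparate-impact ratio from parity, and the Fairness Score is one minus the mean Bias Index; these definitions are illustrative, not the paper's.

```python
# Illustrative only: assumed definitions of Bias Index and Fairness Score,
# not the formulas from the paper.
import numpy as np

def disparate_impact(y_pred, group):
    """Ratio of favorable-outcome rates: unprivileged (0) over privileged (1)."""
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv = y_pred[group == 1].mean()
    return rate_unpriv / rate_priv if rate_priv > 0 else 0.0

def bias_index(y_pred, group):
    """Assumed Bias Index: deviation of disparate impact from parity (1.0)."""
    return abs(1.0 - disparate_impact(y_pred, group))

def fairness_score(y_pred, protected):
    """Assumed aggregate: 1 minus the mean Bias Index over protected attributes."""
    indices = {attr: bias_index(y_pred, g) for attr, g in protected.items()}
    return 1.0 - float(np.mean(list(indices.values()))), indices

# Toy example: binary decisions for 8 applicants, two protected attributes.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 1])
protected = {
    "gender": np.array([0, 0, 0, 0, 1, 1, 1, 1]),
    "age":    np.array([0, 1, 0, 1, 0, 1, 0, 1]),
}
score, per_attr = fairness_score(y_pred, protected)
print(f"Fairness Score (sketch): {score:.3f}")        # higher = fairer here
print({a: round(b, 3) for a, b in per_attr.items()})  # comparative bias per attribute
```

The per-attribute dictionary mirrors the abstract's point that the Bias Index reveals comparative bias amongst protected attributes; an auditor could threshold the aggregate score as one step of a certification SOP.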
Related papers
- Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing [0.0]
The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework that combines the strengths of counterfactual fairness and a peer-comparison strategy.
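The summary names the ingredients (counterfactual fairness plus peer comparison) without the mechanics, so the sketch below shows one hypothetical peer-comparison audit: flag a rejected applicant when their nearest peers, similar in non-protected features but from the other protected group, were mostly approved. The neighbor count and threshold are arbitrary illustrative choices, not the paper's.

```python
# Hypothetical peer-comparison audit (not the paper's algorithm).
import numpy as np

def peer_unfairness_flags(X, y_pred, protected, k=3, threshold=0.5):
    flags = []
    for i in range(len(X)):
        if y_pred[i] == 1:                       # approved: nothing to flag
            flags.append(False)
            continue
        peers = np.flatnonzero(protected != protected[i])
        if peers.size == 0:
            flags.append(False)
            continue
        dists = np.linalg.norm(X[peers] - X[i], axis=1)
        nearest = peers[np.argsort(dists)[:k]]   # k most similar cross-group peers
        flags.append(y_pred[nearest].mean() > threshold)
    return np.array(flags)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))                     # non-protected features
protected = rng.integers(0, 2, 20)               # binary protected attribute
y_pred = rng.integers(0, 2, 20)                  # model decisions
print(np.flatnonzero(peer_unfairness_flags(X, y_pred, protected)))  # flagged cases
```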
arXiv Detail & Related papers (2024-08-05T15:35:34Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
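As a flavor of what such an audit can look like in code, here is a hedged sketch that probes a CLIP-style model for association skew between occupation prompts and group prompts via the open_clip API; the prompt sets and the skew statistic are illustrative choices, not the paper's taxonomy or protocol.

```python
# Illustrative association-skew probe for a CLIP-style model
# (not the evaluation protocol from the paper).
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

occupations = ["a photo of a doctor", "a photo of a nurse"]
groups = ["a photo of a man", "a photo of a woman"]

with torch.no_grad():
    emb = model.encode_text(tokenizer(occupations + groups))
    emb = emb / emb.norm(dim=-1, keepdim=True)   # unit-normalize embeddings

occ, grp = emb[: len(occupations)], emb[len(occupations):]
sims = occ @ grp.T                               # occupation x group cosine similarity
# Positive skew = closer to the first group prompt; negative = the second.
print({p: round((sims[i, 0] - sims[i, 1]).item(), 4) for i, p in enumerate(occupations)})
```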
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Equal Confusion Fairness: Measuring Group-Based Disparities in Automated Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here can be used to assess automated decision systems' fairness as part of a more extensive accountability assessment.
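The one-line summary does not define the metric, so the sketch below assumes one natural reading: compare each group's normalized confusion matrix with the pooled matrix and report the largest elementwise gap. The paper's actual test and error term may be defined differently.

```python
# Assumed form of a confusion-parity check (not necessarily the paper's definition).
import numpy as np
from sklearn.metrics import confusion_matrix

def normalized_confusion(y_true, y_pred, labels=(0, 1)):
    cm = confusion_matrix(y_true, y_pred, labels=labels).astype(float)
    return cm / cm.sum()                 # joint distribution over (true, predicted)

def confusion_parity_error(y_true, y_pred, group):
    """Largest deviation of any group's confusion distribution from the pooled one."""
    overall = normalized_confusion(y_true, y_pred)
    gaps = []
    for g in np.unique(group):
        m = group == g
        gaps.append(np.abs(normalized_confusion(y_true[m], y_pred[m]) - overall).max())
    return max(gaps)

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"confusion parity error (sketch): {confusion_parity_error(y_true, y_pred, group):.3f}")
```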
arXiv Detail & Related papers (2023-07-02T04:44:19Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
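As a loose, assumed rendering of the transition-path idea (not Fair-CDA's actual objective), the sketch below interpolates between samples drawn from two sensitive groups and penalizes how much the model's output changes along the path; the penalty can be added to the task loss as a fairness regularizer.

```python
# Assumed path-smoothness regularizer inspired by the description above.
import torch

def path_regularizer(model, x_a, x_b, steps=5):
    """Mean squared output change along linear paths from x_a to x_b."""
    penalty = 0.0
    prev = model(x_a)
    for t in torch.linspace(0, 1, steps)[1:]:
        cur = model(x_a + t * (x_b - x_a))       # point along the transition path
        penalty = penalty + ((cur - prev) ** 2).mean()
        prev = cur
    return penalty / (steps - 1)

model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))
x_a, x_b = torch.randn(16, 4), torch.randn(16, 4)  # batches from two sensitive groups
reg = path_regularizer(model, x_a, x_b)
reg.backward()                                     # add to the task loss in practice
print(round(reg.item(), 4))
```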
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
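The summary does not give the objective, so here is a toy rendering of the two criteria under assumed loss forms: a demographic-parity gap for the group criterion, and an embedding-consistency term between each sample and its counterfactual (sensitive attribute flipped) for the individual criterion; DualFair's actual contrastive formulation will differ.

```python
# Toy joint objective with assumed loss forms (not DualFair's actual losses).
import torch
import torch.nn.functional as F

def group_fairness_gap(scores, sensitive):
    """Demographic-parity-style gap between the two groups' mean scores."""
    return (scores[sensitive == 1].mean() - scores[sensitive == 0].mean()).abs()

def counterfactual_consistency(z, z_cf):
    """Cosine distance between embeddings of samples and their counterfactuals."""
    return (1 - F.cosine_similarity(z, z_cf, dim=1)).mean()

torch.manual_seed(0)
z, z_cf = torch.randn(16, 8), torch.randn(16, 8)   # real / counterfactual embeddings
scores = torch.sigmoid(torch.randn(16))            # model scores in [0, 1]
sensitive = torch.tensor([0, 1] * 8)               # binary sensitive attribute
loss = group_fairness_gap(scores, sensitive) + counterfactual_consistency(z, z_cf)
print(round(loss.item(), 4))
```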
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners and policymakers with great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- On the Identification of Fair Auditors to Evaluate Recommender Systems based on a Novel Non-Comparative Fairness Notion [1.116812194101501]
Decision-support systems have been found to be discriminatory in the context of many practical deployments.
We propose a new fairness notion based on the principle of non-comparative justice.
We show that the proposed fairness notion also provides guarantees in terms of comparative fairness notions.
arXiv Detail & Related papers (2020-09-09T16:04:41Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes to draft a toolbox that helps practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.