Towards Fairness Certification in Artificial Intelligence
- URL: http://arxiv.org/abs/2106.02498v1
- Date: Fri, 4 Jun 2021 14:12:12 GMT
- Title: Towards Fairness Certification in Artificial Intelligence
- Authors: Tatiana Tommasi, Silvia Bucci, Barbara Caputo, Pietro Asinari
- Abstract summary: We propose a first joint effort to define the operational steps needed for AI fairness certification.
We outline the criteria that an AI system should meet before coming into official service, together with the conformity assessment procedures useful to monitor its functioning for fair decisions.
- Score: 31.920661197618195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thanks to the great progress of machine learning in recent years, several Artificial Intelligence (AI) techniques have been increasingly moving from controlled research laboratory settings to our everyday life.
AI is clearly supportive in many decision-making scenarios, but when it comes to sensitive areas such as health care, hiring policies, education, banking or justice, with major impact on individuals and society, it becomes crucial to establish guidelines on how to design, develop, deploy and monitor this technology.
Indeed, the decision rules elaborated by machine learning models are data-driven, and there are multiple ways in which discriminatory biases can seep into data.
Algorithms trained on such data run the risk of amplifying prejudices and societal stereotypes by over-associating protected attributes such as gender, ethnicity or disabilities with the prediction task.
Starting from the extensive experience of the National Metrology Institute on measurement standards and certification roadmaps, and of Politecnico di Torino on machine learning as well as methods for domain bias evaluation and mitigation, we propose a first joint effort to define the operational steps needed for AI fairness certification.
Specifically, we outline the criteria that an AI system should meet before coming into official service, together with the conformity assessment procedures useful to monitor its functioning for fair decisions.
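As an illustration of what one such conformity assessment procedure could look like in practice, the sketch below computes two common group-fairness measures (demographic parity difference and equal opportunity difference) for a binary classifier on an audit set and compares them against tolerance thresholds. This is a minimal, hypothetical example: the metric choices, the tolerance values, and all function and variable names are assumptions made for illustration and are not taken from the paper or from any certification standard.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap between the highest and lowest positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap between the highest and lowest true-positive rate across groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

def fairness_conformity_check(y_true, y_pred, group, dp_tol=0.1, eo_tol=0.1):
    """Return the two fairness measures and a pass/fail flag against
    hypothetical tolerance thresholds (dp_tol, eo_tol)."""
    dp = demographic_parity_difference(y_pred, group)
    eo = equal_opportunity_difference(y_true, y_pred, group)
    return {
        "demographic_parity_difference": dp,
        "equal_opportunity_difference": eo,
        "conforms": bool(dp <= dp_tol and eo <= eo_tol),
    }

if __name__ == "__main__":
    # Toy audit set: a binary protected attribute, ground-truth labels,
    # and a predictor whose positive rate depends slightly on the group.
    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, size=n)
    y_true = rng.integers(0, 2, size=n)
    y_pred = (rng.random(n) < 0.5 + 0.05 * group).astype(int)
    print(fairness_conformity_check(y_true, y_pred, group))
```

In an actual certification setting, the metrics, thresholds, protected attributes, and audit data would be fixed by the applicable standard and documented as part of the conformity assessment, rather than chosen ad hoc as in this sketch.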
Related papers
- Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog [0.0]
It is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high-quality standards.
The issue of the trustworthiness of AI applications is crucial and is the subject of numerous major publications.
This AI assessment catalog addresses exactly this point and is intended for two target groups.
arXiv Detail & Related papers (2023-06-20T08:07:18Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z) - Certifiable Artificial Intelligence Through Data Fusion [7.103626867766158]
This paper reviews and proposes concerns in adopting, fielding, and maintaining artificial intelligence (AI) systems.
A notional use case is presented with image data fusion to support AI object recognition certifiability considering precision versus distance.
arXiv Detail & Related papers (2021-11-03T03:34:19Z) - Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications [5.7576910363986]
The T"UV AUSTRIA Group in cooperation with the Institute for Machine Learning at the Johannes Kepler University Linz proposes a certification process.
The holistic approach attempts to evaluate and verify the aspects of secure software development, functional requirements, data quality, data protection, and ethics.
The audit catalog can be applied to low-risk applications within the scope of supervised learning.
arXiv Detail & Related papers (2021-03-31T08:59:55Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z) - Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.