Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
- URL: http://arxiv.org/abs/2004.07173v1
- Date: Wed, 15 Apr 2020 15:58:05 GMT
- Title: Bias in Multimodal AI: Testbed for Fair Automatic Recruitment
- Authors: Alejandro Peña, Ignacio Serna, Aythami Morales, and Julian Fierrez
- Abstract summary: We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
- Score: 73.85525896663371
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The presence of decision-making algorithms in society is rapidly increasing,
while concerns about their transparency and the possibility of these
algorithms becoming new sources of discrimination are rising. In fact, many
relevant automated systems have been shown to make decisions based on sensitive
information or to discriminate against certain social groups (e.g. certain biometric
systems for person recognition). With the aim of studying how current
multimodal algorithms based on heterogeneous sources of information are
affected by sensitive elements and inner biases in the data, we propose a
fictitious automated recruitment testbed: FairCVtest. We train automatic
recruitment algorithms using a set of multimodal synthetic profiles consciously
scored with gender and racial biases. FairCVtest shows the capacity of the
Artificial Intelligence (AI) behind such a recruitment tool to extract sensitive
information from unstructured data and exploit it in combination with data
biases in undesirable (unfair) ways. Finally, we present a list of recent works
developing techniques capable of removing sensitive information from the
decision-making process of deep learning architectures. We have used one of
these algorithms (SensitiveNets) to experiment with discrimination-aware learning
for the elimination of sensitive information in our multimodal AI framework.
Our methodology and results show how to generate fairer AI-based tools in
general, and in particular fairer automated recruitment systems.
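The paper itself does not include code, but the core idea of discrimination-aware learning can be illustrated with a small sketch. The snippet below is a minimal, hypothetical example, not the authors' SensitiveNets implementation: a multimodal candidate representation is trained to predict the recruitment score while an adversarial head penalizes any sensitive information (e.g. gender) that remains recoverable from the learned embedding. All module names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal, hypothetical sketch of discrimination-aware representation learning
# (inspired by the adversarial-debiasing idea; NOT the actual SensitiveNets code).
import torch
import torch.nn as nn

class CandidateEncoder(nn.Module):
    """Maps a multimodal feature vector (CV features + face embedding) to a latent representation."""
    def __init__(self, in_dim=128, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, x):
        return self.net(x)

encoder = CandidateEncoder()
scorer = nn.Linear(32, 1)     # predicts the recruitment score
adversary = nn.Linear(32, 1)  # tries to recover the sensitive attribute (e.g. gender)

opt_main = torch.optim.Adam(list(encoder.parameters()) + list(scorer.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()
alpha = 1.0  # weight of the fairness penalty (illustrative value)

def training_step(x, score_target, sensitive_attr):
    # 1) Update the adversary so it is a strong probe of sensitive information.
    with torch.no_grad():
        z = encoder(x)
    adv_loss = bce(adversary(z).squeeze(-1), sensitive_attr)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # 2) Update encoder + scorer: predict the score well while pushing the
    #    adversary toward chance level (sensitive attribute unrecoverable).
    #    Real implementations often use a gradient-reversal layer instead of
    #    the simple subtraction used here.
    z = encoder(x)
    task_loss = mse(scorer(z).squeeze(-1), score_target)
    leak_loss = bce(adversary(z).squeeze(-1), sensitive_attr)
    loss = task_loss - alpha * leak_loss
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return task_loss.item(), leak_loss.item()

# Toy usage with random tensors standing in for multimodal candidate profiles.
x = torch.randn(16, 128)
score_target = torch.rand(16)
sensitive_attr = torch.randint(0, 2, (16,)).float()
print(training_step(x, score_target, sensitive_attr))
```

The design choice in this sketch mirrors the paper's goal: the recruitment score should stay accurate while the latent representation carries as little gender or ethnicity information as possible, which can be audited by checking how close the adversary's accuracy stays to chance.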
Related papers
- Dynamically Masked Discriminator for Generative Adversarial Networks [71.33631511762782]
Training Generative Adversarial Networks (GANs) remains a challenging problem.
The discriminator trains the generator by learning the distribution of real/generated data.
We propose a novel method for GANs from the viewpoint of online continual learning.
arXiv Detail & Related papers (2023-06-13T12:07:01Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
- Detecting discriminatory risk through data annotation based on Bayesian inferences [5.017973966200985]
We propose a method of data annotation that aims to warn about the risk of discriminatory results of a given data set.
We empirically test our system on three datasets commonly accessed by the machine learning community.
arXiv Detail & Related papers (2021-01-27T12:43:42Z)
- Investigating the Robustness of Artificial Intelligent Algorithms with Mixture Experiments [1.877815076482061]
The robustness of AI algorithms is of great interest, as inaccurate predictions could result in safety concerns and limit the adoption of AI systems.
A robust classification algorithm is expected to have high accuracy and low variability under different application scenarios.
We conduct a comprehensive set of mixture experiments to collect prediction performance results.
Then statistical analyses are conducted to understand how various factors affect the robustness of AI classification algorithms.
arXiv Detail & Related papers (2020-10-10T15:38:53Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning that eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)