Discrimination in machine learning algorithms
- URL: http://arxiv.org/abs/2207.00108v1
- Date: Thu, 30 Jun 2022 21:35:42 GMT
- Title: Discrimination in machine learning algorithms
- Authors: Roberta Pappadà and Francesco Pauli
- Abstract summary: Machine learning algorithms are routinely used for business decisions that may directly affect individuals, for example, because a credit scoring algorithm refuses them a loan.
It is then relevant from an ethical (and legal) point of view to ensure that these algorithms do not discriminate based on sensitive attributes (like sex or race), which may occur without the operator's or management's awareness.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Machine learning algorithms are routinely used for business decisions that
may directly affect individuals, for example, because a credit scoring
algorithm refuses them a loan. It is then relevant from an ethical (and legal)
point of view to ensure that these algorithms do not discriminate based on
sensitive attributes (like sex or race), which may occur without the
operator's or management's awareness. Statistical tools and methods are then
required to detect and eliminate such potential biases.
Related papers
- Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then look at the effect of democratizing the rewards and punishments, or stakes, to the algorithmic classification to consider how a society can potentially stem (or facilitate!) predatory classification.
arXiv Detail & Related papers (2023-07-05T14:22:01Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Stochastic Differentially Private and Fair Learning [7.971065005161566]
We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different notions of fairness, including demographic parity and equalized odds (see the sketch after this list).
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
arXiv Detail & Related papers (2022-10-17T06:54:57Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first definition of such a general evaluation.
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Selective Credit Assignment [57.41789233550586]
We describe a unified view on temporal-difference algorithms for selective credit assignment.
We present insights into applying weightings to value-based learning and planning algorithms.
arXiv Detail & Related papers (2022-02-20T00:07:57Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Coping with Mistreatment in Fair Algorithms [1.2183405753834557]
We study algorithmic fairness in a supervised learning setting and examine the effect of optimizing a classifier for the Equal Opportunity metric.
We propose a conceptually simple method to mitigate this bias.
We rigorously analyze the proposed method and evaluate it on several real-world datasets, demonstrating its efficacy.
arXiv Detail & Related papers (2021-02-22T03:26:06Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers [1.3300455020806103]
Machine learning is becoming an ever-present part of our lives, as many decisions are made by machine learning algorithms.
These decisions are often unfair and discriminate against protected groups based on race or gender.
This work gives an introduction to discrimination, the legislative foundations to counter it, and strategies to detect and prevent such behavior in machine learning algorithms.
arXiv Detail & Related papers (2018-11-20T12:03:55Z)
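Several entries above refer to group fairness criteria such as equalized odds and equal opportunity. As a reference point, the sketch below computes the corresponding gaps from their standard definitions; it illustrates the metrics only, not any specific paper's algorithm, and all names are hypothetical.

```python
# Illustrative computation of the equal-opportunity and equalized-odds
# gaps for a binary classifier. The definitions are standard; the
# function and variable names are assumptions for this sketch.
import numpy as np

def rate(y_pred, mask):
    """Fraction of positive predictions within a boolean mask."""
    return y_pred[mask].mean() if mask.any() else np.nan

def fairness_gaps(y_true, y_pred, group):
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = {}
    # Equal opportunity: difference in true-positive rates across groups.
    tpr = [rate(y_pred, (group == g) & (y_true == 1)) for g in (0, 1)]
    gaps["equal_opportunity"] = abs(tpr[0] - tpr[1])
    # Equalized odds additionally matches false-positive rates.
    fpr = [rate(y_pred, (group == g) & (y_true == 0)) for g in (0, 1)]
    gaps["equalized_odds"] = max(abs(tpr[0] - tpr[1]),
                                 abs(fpr[0] - fpr[1]))
    return gaps
```

Gaps near zero indicate that the classifier's error rates are balanced across the sensitive groups; how to enforce this during training is what the methods above address.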
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.