A Normative approach to Attest Digital Discrimination
- URL: http://arxiv.org/abs/2007.07092v2
- Date: Mon, 3 Aug 2020 11:39:18 GMT
- Title: A Normative approach to Attest Digital Discrimination
- Authors: Natalia Criado, Xavier Ferrer, Jose M. Such
- Abstract summary: Examples include low-income neighbourhoods targeted with high-interest loans or low credit scores, and women being undervalued by 21% in online marketing.
We use norms as an abstraction to represent different situations that may lead to digital discrimination.
In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
- Score: 6.372554934045607
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital discrimination is a form of discrimination whereby users are
automatically treated unfairly, unethically or just differently based on their
personal data by a machine learning (ML) system. Examples of digital
discrimination include low-income neighbourhoods targeted with high-interest
loans or low credit scores, and women being undervalued by 21% in online
marketing. Recently, different techniques and tools have been proposed to
detect biases that may lead to digital discrimination. These tools often
require technical expertise to be executed and for their results to be
interpreted. To allow non-technical users to benefit from ML, simpler notions
and concepts to represent and reason about digital discrimination are needed.
In this paper, we use norms as an abstraction to represent different situations
that may lead to digital discrimination. In particular, we formalise
non-discrimination norms in the context of ML systems and propose an algorithm
to check whether ML systems violate these norms.
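As a rough illustration only, the sketch below shows one way such a norm check could look in code, assuming a norm that prohibits the protected attribute from changing the model's decision and a counterfactual flip test on a toy scikit-learn classifier; the column index, tolerance, and flip-based check are illustrative assumptions, not the paper's actual formalism or algorithm.

```python
# Minimal sketch (not the authors' algorithm): test a non-discrimination norm
# of the form "the protected attribute must not change the decision" by
# flipping that attribute and comparing the model's predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: column 0 is a hypothetical protected attribute (0/1 group),
# the remaining columns are ordinary features.
X = rng.normal(size=(1000, 4))
X[:, 0] = rng.integers(0, 2, size=1000)
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def violates_norm(model, X, protected_col=0, tolerance=0.0):
    """Counterfactual-style check: flag a violation if flipping the protected
    attribute changes the decision for more than `tolerance` of individuals."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    changed = model.predict(X) != model.predict(X_flipped)
    return changed.mean() > tolerance

print("norm violated:", violates_norm(model, X))
```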
Related papers
- Hacking a surrogate model approach to XAI [49.1574468325115]
We show that even if a discriminated subgroup does not get a positive decision from the black box ADM system, the corresponding question of group membership can be pushed down to an arbitrarily low level.
Our approach can be generalized easily to other surrogate models.
arXiv Detail & Related papers (2024-06-24T13:18:02Z) - Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z) - Why Fair Automated Hiring Systems Breach EU Non-Discrimination Law [0.0]
Employment selection processes that use automated hiring systems based on machine learning are becoming increasingly commonplace.
Algorithmic fairness and algorithmic non-discrimination are not the same.
This article examines a conflict between the two: whether such hiring systems are compliant with EU non-discrimination law.
arXiv Detail & Related papers (2023-11-07T11:31:00Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793]
The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z) - Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z) - Context-Aware Discrimination Detection in Job Vacancies using Computational Language Models [0.0]
Discriminatory job vacancies are disapproved worldwide, but remain persistent.
Discriminatory job vacancies can be explicit by directly referring to demographic memberships of candidates.
Implicit forms of discrimination are also present; these may not always be illegal but still influence the diversity of applicants.
arXiv Detail & Related papers (2022-02-02T09:25:08Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z) - DeBayes: a Bayesian Method for Debiasing Network Embeddings [16.588468396705366]
We propose DeBayes: a conceptually elegant Bayesian method that is capable of learning debiased embeddings by using a biased prior.
Our experiments show that these representations can then be used to perform link prediction that is significantly more fair in terms of popular metrics.
arXiv Detail & Related papers (2020-02-26T12:57:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.