No computation without representation: Avoiding data and algorithm
biases through diversity
- URL: http://arxiv.org/abs/2002.11836v1
- Date: Wed, 26 Feb 2020 23:07:39 GMT
- Title: No computation without representation: Avoiding data and algorithm
biases through diversity
- Authors: Caitlin Kuhlman, Latifa Jackson, Rumi Chunara
- Abstract summary: We draw connections between the lack of diversity within academic and professional computing fields and the type and breadth of the biases encountered in datasets.
We use these lessons to develop recommendations that provide concrete steps for the computing community to increase diversity.
- Score: 11.12971845021808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emergence and growth of research on issues of ethics in AI, and in
particular algorithmic fairness, has roots in an essential observation that
structural inequalities in society are reflected in the data used to train
predictive models and in the design of objective functions. While research
aiming to mitigate these issues is inherently interdisciplinary, the design of
unbiased algorithms and fair socio-technical systems is a key desired outcome,
one which depends on practitioners from the fields of data science and computing.
However, these computing fields broadly also suffer from the same
under-representation issues that are found in the datasets we analyze. This
disconnect affects the design of both the desired outcomes and metrics by which
we measure success. If the ethical AI research community accepts this, we
tacitly endorse the status quo and contradict the goals of non-discrimination
and equity which work on algorithmic fairness, accountability, and transparency
seeks to address. Therefore, we advocate in this work for diversifying
computing as a core priority of the field and our efforts to achieve ethical AI
practices. We draw connections between the lack of diversity within academic
and professional computing fields and the type and breadth of the biases
encountered in datasets, machine learning models, problem formulations, and
interpretation of results. Examining the current fairness/ethics in AI
literature, we highlight cases where this lack of diverse perspectives has been
foundational to the inequity in treatment of underrepresented and protected
group data. We also look to other professional communities, such as in law and
health, where disparities have been reduced both in the educational diversity
of trainees and among professional practices. We use these lessons to develop
recommendations that provide concrete steps for the computing community to
increase diversity.
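
To make concrete what it means for under-representation to surface both in the data and in the metrics used to judge models, the sketch below computes group representation shares and a demographic (statistical) parity gap on a small, entirely hypothetical dataset. It is illustrative only; the paper itself offers recommendations for the computing community rather than an algorithm, and the records, group labels, and predictions here are made up.

```python
# Illustrative only: toy, hypothetical records of (group, model_prediction).
# Not taken from the paper -- just a minimal check of representation and
# outcome disparity of the kind fairness audits report.
from collections import Counter

records = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 1),  # group B is under-represented in this toy data
]

# Representation: each group's share of the dataset.
counts = Counter(group for group, _ in records)
for group, n in counts.items():
    print(f"group {group}: {n}/{len(records)} = {n / len(records):.0%} of the data")

# Demographic (statistical) parity gap: difference in positive-prediction rates.
def positive_rate(group: str) -> float:
    preds = [pred for g, pred in records if g == group]
    return sum(preds) / len(preds)

gap = positive_rate("A") - positive_rate("B")
print(f"positive-rate gap (A - B): {gap:+.2f}")  # a nonzero gap signals disparate outcomes
```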
Related papers
- FairAIED: Navigating Fairness, Bias, and Ethics in Educational AI Applications [2.612585751318055]
The integration of Artificial Intelligence into education has transformative potential, providing tailored learning experiences and creative instructional approaches.
However, the inherent biases in AI algorithms hinder this improvement by unintentionally perpetuating prejudice against specific demographics.
This survey delves deeply into the developing topic of algorithmic fairness in educational contexts.
It identifies common forms of bias, such as data-related, algorithmic, and user-interaction biases, that fundamentally undermine fairness in AI teaching aids.
arXiv Detail & Related papers (2024-07-26T13:59:20Z)
- Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness [6.683051393349788]
This research introduces an innovative approach to quantify cross-sectoral intersecting discrepancies.
We validate our approach using both proprietary and public datasets.
Our findings reveal significant discrepancies between minority ethnic groups, highlighting the need for targeted interventions in real-world AI applications.
arXiv Detail & Related papers (2024-05-24T08:10:31Z)
- Reconciling Predictive and Statistical Parity: A Causal Approach [68.59381759875734]
We propose a new causal decomposition formula for the fairness measures associated with predictive parity.
We show that the notions of statistical and predictive parity are not mutually exclusive but complementary, spanning a spectrum of fairness notions (the standard definitions of both criteria are sketched after this list).
arXiv Detail & Related papers (2023-06-08T09:23:22Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- A toolkit of dilemmas: Beyond debiasing and fairness formulas for responsible AI/ML [0.0]
Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging field of critical data studies.
This paper advocates for a situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems.
arXiv Detail & Related papers (2023-03-03T13:58:24Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- A Field Guide to Federated Optimization [161.3779046812383]
Federated learning and analytics are distributed approaches for collaboratively learning models (or statistics) from decentralized data.
This paper provides recommendations and guidelines on formulating, designing, evaluating and analyzing federated optimization algorithms.
arXiv Detail & Related papers (2021-07-14T18:09:08Z)
- Fair Representation Learning for Heterogeneous Information Networks [35.80367469624887]
We propose a comprehensive set of de-biasing methods for fair HIN representation learning.
We study the behavior of these algorithms, especially their capability in balancing the trade-off between fairness and prediction accuracy.
We evaluate the performance of the proposed methods in an automated career counseling application.
arXiv Detail & Related papers (2021-04-18T08:28:18Z)
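
For reference, the sketch below spells out the two criteria contrasted in the "Reconciling Predictive and Statistical Parity" entry above, using made-up data. These are the standard textbook definitions (group positive-prediction rate and within-group precision), not that paper's causal decomposition; the dataset and group labels are hypothetical.

```python
# Hypothetical (group, true_label, predicted_label) triples for illustration.
data = [
    ("A", 1, 1), ("A", 0, 1), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

def statistical_parity_rate(group: str) -> float:
    """P(pred = 1 | group): equal rates across groups means statistical parity."""
    rows = [(y, p) for g, y, p in data if g == group]
    return sum(p for _, p in rows) / len(rows)

def predictive_parity_rate(group: str) -> float:
    """P(true = 1 | pred = 1, group): equal precision across groups means predictive parity."""
    positives = [y for g, y, p in data if g == group and p == 1]
    return sum(positives) / len(positives)

for g in ("A", "B"):
    print(g,
          f"statistical: {statistical_parity_rate(g):.2f}",
          f"predictive: {predictive_parity_rate(g):.2f}")
```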
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of the information and is not responsible for any consequences of its use.