A Framework for Fairer Machine Learning in Organizations
- URL: http://arxiv.org/abs/2009.04661v1
- Date: Thu, 10 Sep 2020 04:07:10 GMT
- Title: A Framework for Fairer Machine Learning in Organizations
- Authors: Lily Morse, Mike H.M. Teodorescu, Yazeed Awwad, Gerald Kane
- Abstract summary: Risks of unfairness abound when human decision processes in domains of socio-economic importance are automated.
We reveal sources of unfair machine learning, review fairness criteria, and provide a framework which, if implemented, would enable an organization to avoid implementing an unfair machine learning model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increase in adoption of machine learning tools by organizations,
risks of unfairness abound, especially when human decision processes in
domains of socio-economic importance such as hiring, housing, lending, and
admissions are automated. We reveal sources of unfair machine learning, review
fairness criteria, and provide a framework which, if implemented, would enable
an organization both to avoid implementing an unfair machine learning model
and to avoid the common situation in which an algorithm, as it learns from
more data, becomes unfair over time. Issues of behavioral ethics in machine
learning implementations by organizations have not been thoroughly addressed in
the literature, because many of the necessary concepts are dispersed across
three literatures: ethics, machine learning, and management. Further, tradeoffs
between fairness criteria in machine learning have not been addressed with
regard to organizations. We advance the research by introducing an organizing
framework for selecting and implementing fair algorithms in organizations.
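To make the fairness criteria reviewed in the paper concrete, below is a minimal sketch of two widely used criteria, demographic parity and equal opportunity, for a binary classifier with a binary protected attribute. The function names and synthetic data are illustrative, not taken from the paper; the paper's point about models drifting toward unfairness suggests such checks would need to be re-run as new data arrives.

```python
# Minimal sketch of two common fairness criteria; assumes binary
# predictions and a binary protected attribute. Illustrative only.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between the two groups."""
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])

# Re-running these audits periodically guards against a model that was
# fair at deployment becoming unfair as it learns from new data.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
print(demographic_parity_gap(y_pred, group))
print(equal_opportunity_gap(y_true, y_pred, group))
```

In general these criteria cannot all be satisfied simultaneously, which is precisely the kind of tradeoff the framework asks organizations to adjudicate explicitly.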
Related papers
- Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML [52.86328317233883]
We present a comprehensive overview of different ways in which fairness-related harm can arise.
We highlight several open technical challenges for future work in this direction.
arXiv Detail & Related papers (2023-03-15T09:40:08Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
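As a rough illustration of the starting point of such an analysis, the sketch below computes the observed disparity in outcome rates between two groups (a total-variation-style quantity); decomposing it into causal mechanisms requires a causal model, which this snippet deliberately omits, and the function name is hypothetical.

```python
# Illustrative only: the observed disparity that a causal fairness
# analysis would then attribute to direct, indirect, or spurious
# pathways. No causal decomposition is attempted here.
import numpy as np

def observed_disparity(y, x):
    """Gap in outcome rates between group x=1 and group x=0."""
    return y[x == 1].mean() - y[x == 0].mean()

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 1000)
y = rng.binomial(1, 0.4 + 0.2 * x)  # synthetic outcome correlated with x
print(observed_disparity(y, x))     # close to 0.2 by construction
```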
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning, which develops metrics and definitions of fairness, cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- A Framework for Fairness: A Systematic Review of Existing Fair AI Solutions [4.594159253008448]
A large portion of fairness research has gone toward producing tools that machine learning practitioners can use to audit for bias while designing their algorithms.
However, these fairness solutions are rarely applied in practice.
This review provides an in-depth summary of the algorithmic bias issues that have been defined and the fairness solution space that has been proposed.
arXiv Detail & Related papers (2021-12-10T17:51:20Z)
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
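Since the survey targets practitioners, a small sketch of one generic family of individual explanation methods may help: perturbation-based attribution, which probes a black-box model locally. This is a plain finite-difference probe, not any specific technique from the survey, and all names are illustrative.

```python
# Hedged sketch of perturbation-based local attribution: estimate each
# feature's local influence on a black-box scorer by finite differences.
import numpy as np

def local_attribution(model_fn, x, eps=1e-2):
    """Approximate each feature's local effect on model_fn at point x."""
    base = model_fn(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        scores[i] = (model_fn(x_pert) - base) / eps
    return scores

# Toy check with a linear scorer: attributions recover the coefficients.
weights = np.array([0.5, -1.0, 2.0])
model = lambda v: float(v @ weights)
print(local_attribution(model, np.array([1.0, 2.0, 3.0])))
```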
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Bridging Machine Learning and Mechanism Design towards Algorithmic Fairness [6.6358581196331095]
We argue that building fair decision-making systems requires overcoming limitations inherent to each field.
We begin to lay the groundwork towards this goal by comparing the perspective each discipline takes on fair decision-making.
arXiv Detail & Related papers (2020-10-12T03:42:20Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
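The general mechanics of such a testbed can be sketched as follows: synthetic scores are drawn from a latent merit signal plus a known group-dependent offset, so any bias a model trained on the scores inherits can be measured against ground truth. The variable names and bias magnitude below are hypothetical, not the paper's actual protocol.

```python
# Hypothetical sketch of a biased synthetic testbed: inject a known
# group-dependent penalty into scores so learned bias is measurable.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
skill = rng.normal(0.0, 1.0, n)      # latent merit (the fair target)
group = rng.integers(0, 2, n)        # protected attribute
bias = -0.5                          # injected penalty for group == 1
score = skill + bias * group + rng.normal(0.0, 0.1, n)

# The measured gap recovers the injected bias, giving a ground truth
# against which a trained screening model can be audited.
print(score[group == 1].mean() - score[group == 0].mean())  # about -0.5
```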
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a draft toolbox that helps practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology [53.063411515511056]
We propose a process model for the development of machine learning applications.
The first phase combines business and data understanding as data availability oftentimes affects the feasibility of the project.
The sixth phase covers state-of-the-art approaches for the monitoring and maintenance of machine learning applications.
arXiv Detail & Related papers (2020-03-11T08:25:49Z)
- FAE: A Fairness-Aware Ensemble Framework [18.993049769711114]
The FAE (Fairness-Aware Ensemble) framework combines fairness-related interventions at both the pre- and post-processing steps of the data analysis process.
In the pre-processing step, we tackle the problems of under-representation of the protected group and of class imbalance.
In the post-processing step, we tackle the problem of class overlapping by shifting the decision boundary in the direction of fairness.
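The post-processing idea can be illustrated with a minimal sketch that lowers the decision threshold for the protected group until positive-prediction rates approximately match. This assumes the protected group is under-selected at the shared threshold; the step size and stopping rule are illustrative, not the FAE paper's exact procedure.

```python
# Illustrative threshold shift toward demographic parity; not the FAE
# paper's exact boundary-shifting procedure.
import numpy as np

def shift_threshold(scores, group, base_thresh=0.5, step=0.01, tol=0.01):
    """Lower the protected group's threshold until rates match within tol."""
    t_prot = base_thresh
    rate_other = (scores[group == 0] >= base_thresh).mean()
    while t_prot > 0:
        rate_prot = (scores[group == 1] >= t_prot).mean()
        if abs(rate_prot - rate_other) <= tol:
            break
        t_prot -= step
    return t_prot

# Toy usage: group 1 is systematically scored lower, so its threshold
# settles below the shared base threshold.
rng = np.random.default_rng(7)
group = rng.integers(0, 2, 2000)
scores = rng.beta(2.0, 2.0, 2000) - 0.1 * group
print(shift_threshold(scores, group))
```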
arXiv Detail & Related papers (2020-02-03T13:05:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.