LimeOut: An Ensemble Approach To Improve Process Fairness
- URL: http://arxiv.org/abs/2006.10531v1
- Date: Wed, 17 Jun 2020 09:00:58 GMT
- Title: LimeOut: An Ensemble Approach To Improve Process Fairness
- Authors: Vaishnavi Bhargava, Miguel Couceiro, Amedeo Napoli
- Abstract summary: We propose a framework that relies on "feature drop-out" to tackle process fairness.
We make use of "LIME Explanations" to assess a classifier's fairness and to determine the sensitive features to remove.
This produces a pool of classifiers whose ensemble is shown empirically to be less dependent on sensitive features, and with improved or no impact on accuracy.
- Score: 8.9379057739817
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Artificial Intelligence and Machine Learning are becoming increasingly
present in several aspects of human life, especially those dealing with
decision making. Many of these algorithmic decisions are taken without human
supervision and through decision-making processes that are not transparent.
This raises concerns about the potential bias of these processes towards
certain groups of society, which may entail unfair results and, possibly,
violations of human rights. Dealing with such biased models is one of the major
concerns for maintaining public trust.
In this paper, we address the question of process or procedural fairness.
More precisely, we consider the problem of making classifiers fairer by
reducing their dependence on sensitive features while increasing (or, at least,
maintaining) their accuracy. To achieve both, we draw inspiration from
"dropout" techniques in neural based approaches, and propose a framework that
relies on "feature drop-out" to tackle process fairness. We make use of "LIME
Explanations" to assess a classifier's fairness and to determine the sensitive
features to remove. This produces a pool of classifiers (through feature
dropout) whose ensemble is shown empirically to be less dependent on sensitive
features, and with improved or no impact on accuracy.
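To make the workflow above concrete, here is a minimal sketch of the feature drop-out idea, assuming a tabular binary-classification task with a numpy feature matrix. It uses scikit-learn and the lime package; the helper names (global_importance, limeout_ensemble, ensemble_proba), the RandomForest base learner, and the top-k importance test are illustrative assumptions rather than the authors' reference implementation.

```python
# Minimal sketch of LIME-guided feature drop-out and ensembling, assuming a
# numpy feature matrix X, binary labels y, and a list of sensitive feature names.
# Helper names and the RandomForest base learner are illustrative choices.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

def global_importance(clf, X, feature_names, n_samples=50, top_k=10, seed=0):
    """Aggregate absolute LIME weights over a sample of instances as a rough
    proxy for each feature's global contribution to the classifier."""
    rng = np.random.default_rng(seed)
    explainer = LimeTabularExplainer(X, feature_names=feature_names)
    scores = np.zeros(X.shape[1])
    for i in rng.choice(len(X), size=min(n_samples, len(X)), replace=False):
        exp = explainer.explain_instance(X[i], clf.predict_proba, num_features=top_k)
        for idx, weight in exp.as_map()[1]:
            scores[idx] += abs(weight)
    return scores

def limeout_ensemble(X, y, feature_names, sensitive, top_k=10):
    """If any sensitive feature ranks among the top_k most important ones,
    train one classifier per dropped sensitive feature (plus one with all of
    them dropped) and return the resulting pool."""
    base = RandomForestClassifier(random_state=0).fit(X, y)
    importance = global_importance(base, X, feature_names, top_k=top_k)
    top = set(np.argsort(importance)[::-1][:top_k])
    offending = [feature_names.index(f) for f in sensitive if feature_names.index(f) in top]
    if not offending:
        return [(base, None)]                       # deemed process fair under this test
    drop_sets = [[j] for j in offending] + [offending]
    pool = []
    for drop in drop_sets:
        cols = [c for c in range(X.shape[1]) if c not in drop]
        pool.append((RandomForestClassifier(random_state=0).fit(X[:, cols], y), cols))
    return pool

def ensemble_proba(pool, X):
    """The ensemble's prediction: average class probabilities over the pool."""
    return np.mean([clf.predict_proba(X if cols is None else X[:, cols]) for clf, cols in pool], axis=0)
```

With hypothetical data, ensemble_proba(limeout_ensemble(X, y, names, ['gender', 'race']), X_test) would yield the ensemble's probabilities; the paper's empirical claim is that such an ensemble depends less on the sensitive features while keeping, or improving, accuracy.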
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z) - Tuning-Free Accountable Intervention for LLM Deployment -- A
Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z) - A Sequentially Fair Mechanism for Multiple Sensitive Attributes [0.46040036610482665]
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score.
We propose a sequential framework that progressively achieves fairness across a set of sensitive features.
Our approach extends seamlessly to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness.
arXiv Detail & Related papers (2023-09-12T22:31:57Z) - Fairness Explainability using Optimal Transport with Applications in
Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system that uses the enforced fairness to measure each feature's influence on the bias.
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - A Novel Approach to Fairness in Automated Decision-Making using
Affective Normalization [2.0178765779788495]
We propose a method for measuring the affective, socially biased component of a decision, thus enabling its removal.
That is, given a decision-making process, these affective measurements remove the affective bias in the decision, rendering it fair across a set of categories defined by the method itself.
arXiv Detail & Related papers (2022-05-02T11:48:53Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By modelling the decision-making processes underlying a set of observed trajectories, we cast policy inference as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Fairness of Machine Learning Algorithms in Demography [0.0]
The paper is devoted to the study of model fairness and process fairness on a Russian demographic dataset.
We take inspiration from "dropout" techniques in neural-based approaches and suggest a model that uses "feature drop-out" to address process fairness.
arXiv Detail & Related papers (2022-02-02T13:12:35Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Making ML models fairer through explanations: the case of LimeOut [7.952582509792971]
Algorithmic decisions are now made on a daily basis, based on Machine Learning (ML) processes that may be complex and biased.
This raises several concerns given the critical impact that biased decisions may have on individuals or on society as a whole.
We show how the simple idea of "feature dropout" followed by an "ensemble approach" can improve model fairness.
arXiv Detail & Related papers (2020-11-01T19:07:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.