Equal Improvability: A New Fairness Notion Considering the Long-term
Impact
- URL: http://arxiv.org/abs/2210.06732v2
- Date: Sun, 9 Apr 2023 04:52:42 GMT
- Title: Equal Improvability: A New Fairness Notion Considering the Long-term
Impact
- Authors: Ozgur Guldogan, Yuchen Zeng, Jy-yong Sohn, Ramtin Pedarsani, Kangwook
Lee
- Abstract summary: We propose a new fairness notion called Equal Improvability (EI).
EI equalizes the potential acceptance rate of the rejected samples across different groups.
We show that the proposed EI-regularized algorithms find classifiers that are fair in terms of EI.
- Score: 27.72859815965265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Devising a fair classifier that does not discriminate against different
groups is an important problem in machine learning. Although researchers have
proposed various ways of defining group fairness, most of them focus only on
immediate fairness, ignoring the long-term impact of a fair classifier under
dynamic scenarios where each individual can improve its features over time.
Such dynamic scenarios arise in the real world, e.g., in college admissions and
credit lending, where each rejected sample makes an effort to change its
features in order to be accepted later. In this dynamic setting, long-term
fairness should equalize the samples' feature distributions across different
groups after the rejected samples make some effort to improve. To promote
long-term fairness, we propose a new fairness notion called Equal Improvability
(EI), which equalizes the potential acceptance rate of the rejected samples
across different groups, assuming a bounded level of effort will be spent by
each rejected sample. We analyze the properties of EI and its connections with
existing fairness notions. To find a classifier that satisfies the EI
requirement, we propose and study three different approaches that solve
EI-regularized optimization problems. Through experiments on both synthetic and
real datasets, we demonstrate that the proposed EI-regularized algorithms find
classifiers that are fair in terms of EI. Finally, we provide experimental
results on dynamic scenarios that highlight the advantages of our EI metric in
achieving long-term fairness. Code is available at
https://github.com/guldoganozgur/ei_fairness.
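For concreteness, here is a minimal formalization of EI as we read it from the abstract; the notation (score function $f$, effort budget $\delta$, group attribute $Z$) is ours, not taken from the paper:

```latex
% EI, reconstructed informally from the abstract (our notation): a classifier
% with score function f and acceptance threshold 1/2 satisfies EI under effort
% budget \delta if the probability that a rejected sample can become acceptable
% after a bounded feature change is equal across all groups z:
\Pr\!\left( \max_{\|\Delta x\| \le \delta} f(x + \Delta x) \ge \tfrac{1}{2}
    \;\middle|\; f(x) < \tfrac{1}{2},\ Z = z \right) = \text{const.}
```

The sketch below shows one way such a notion could be turned into a differentiable training penalty, using projected gradient ascent to approximate each rejected sample's best achievable score within the effort ball. This is an illustration under our own assumptions, not the paper's implementation (its three approaches live in the linked repository); the names `ei_regularizer`, `delta`, `n_steps`, and `lr` are ours.

```python
# Hypothetical sketch of an EI-style regularizer in PyTorch. Not the authors'
# method; see https://github.com/guldoganozgur/ei_fairness for their code.
import torch

def ei_regularizer(model, x, z, delta=0.5, n_steps=10, lr=0.1):
    """Penalize group gaps in the best achievable acceptance score of
    rejected samples after a bounded-effort change (||dx||_2 <= delta)."""
    with torch.no_grad():
        rejected = torch.sigmoid(model(x)).squeeze(-1) < 0.5
    group_scores = []
    for g in torch.unique(z):
        mask = rejected & (z == g)
        if mask.sum() == 0:
            continue  # no rejected samples in this group
        xg = x[mask]
        dx = torch.zeros_like(xg, requires_grad=True)
        for _ in range(n_steps):
            # Gradient ascent on the score with respect to the effort dx only.
            score = torch.sigmoid(model(xg + dx)).sum()
            grad = torch.autograd.grad(score, dx)[0]
            with torch.no_grad():
                dx += lr * grad
                # Project each row back into the effort ball ||dx|| <= delta.
                norms = dx.norm(dim=-1, keepdim=True).clamp(min=1e-12)
                dx *= norms.clamp(max=delta) / norms
        # Best achievable group score; gradients flow to model params here.
        group_scores.append(torch.sigmoid(model(xg + dx.detach())).mean())
    if len(group_scores) < 2:
        return x.new_zeros(())  # nothing to equalize
    scores = torch.stack(group_scores)
    return (scores - scores.mean()).abs().mean()

# Usage sketch: total = bce_loss + lam * ei_regularizer(model, x, z)
```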
Related papers
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Through the Fairness Lens: Experimental Analysis and Evaluation of Entity Matching [17.857838691801884]
Algorithmic fairness has become a timely topic for addressing machine bias and its societal impacts.
Despite extensive research on both algorithmic fairness and entity matching (EM), little attention has been paid to the fairness of entity matching.
We generate two social datasets for auditing EM through the lens of fairness.
arXiv Detail & Related papers (2023-07-06T02:21:08Z)
- Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition called $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness across variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- FaiREE: Fair Classification with Finite-Sample and Distribution-Free Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free notion of individual fairness and a cooperative contextual bandit algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Adversarial Learning for Counterfactual Fairness [15.302633901803526]
In recent years, fairness has become an important topic in the machine learning research community.
We propose to rely on an adversarial neural learning approach that enables more powerful inference than MMD penalties.
Experiments show significant improvements in terms of counterfactual fairness in both the discrete and continuous settings.
arXiv Detail & Related papers (2020-08-30T09:06:03Z)
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There are rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.