Beyond Submodularity: A Unified Framework of Randomized Set Selection
with Group Fairness Constraints
- URL: http://arxiv.org/abs/2304.06596v1
- Date: Thu, 13 Apr 2023 15:02:37 GMT
- Title: Beyond Submodularity: A Unified Framework of Randomized Set Selection
with Group Fairness Constraints
- Authors: Shaojie Tang, Jing Yuan
- Abstract summary: We introduce a unified framework for randomized subset selection that incorporates group fairness constraints.
Our problem involves a global utility function and a set of group utility functions for each group.
Our aim is to generate a distribution across feasible subsets, specifying the selection probability of each feasible set.
- Score: 19.29174615532181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning algorithms play an important role in many consequential
decision-making processes, including targeted advertisement displays, home loan
approvals, and criminal behavior predictions. Given the far-reaching impact of
these algorithms, it is crucial that they operate fairly, free from bias or
prejudice towards certain groups in the population. Ensuring impartiality in
these algorithms is essential for promoting equality and avoiding
discrimination. To this end, we introduce a unified framework for randomized
subset selection that incorporates group fairness constraints. Our problem
involves a global utility function and a set of group utility functions for
each group; here, a group refers to a set of individuals (e.g., people)
sharing the same attributes (e.g., gender). Our aim is to generate a
distribution across feasible subsets, specifying the selection probability of
each feasible set, to maximize the global utility function while meeting a
predetermined quota for each group utility function in expectation. Note that
there need not be any direct connection between the global utility function
and each group utility function. We demonstrate that this framework
unifies and generalizes many significant applications in machine learning and
operations research. Our algorithmic results either improve the best known
results or provide the first approximation algorithms for new applications.
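To make the setup concrete: writing $\mathcal{F}$ for the family of feasible subsets, $f$ for the global utility, and $f_j$, $\alpha_j$ for the $j$-th group utility and its quota (notation chosen here for illustration, not necessarily the paper's), the problem described above is the linear program

```latex
\max_{p}\ \sum_{S \in \mathcal{F}} p(S)\, f(S)
\quad \text{s.t.}\quad
\sum_{S \in \mathcal{F}} p(S)\, f_j(S) \ge \alpha_j \ \text{ for each group } j,
\qquad \sum_{S \in \mathcal{F}} p(S) = 1,\quad p(S) \ge 0.
```

On instances small enough to enumerate $\mathcal{F}$, this LP can be solved directly. The following toy sketch (all utilities, groups, and quotas invented for illustration) does exactly that with scipy:

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

# Toy instance (illustrative, not from the paper): 4 items,
# feasible sets = all subsets of size at most 2.
items = [0, 1, 2, 3]
feasible = [set(s) for r in range(3) for s in combinations(items, r)]

weights = np.array([3.0, 1.0, 2.0, 4.0])   # per-item global utility (modular here)
group_a, group_b = {0, 1}, {2, 3}          # two demographic groups

f = lambda S: sum(weights[i] for i in S)   # global utility of a set
cov = lambda S, G: len(S & G)              # group utility: number of members selected

# Maximize E[f(S)] over distributions p, s.t. E[cov(S, G)] meets each group's quota.
c = -np.array([f(S) for S in feasible])    # negated: linprog minimizes
A_ub = -np.array([[cov(S, group_a) for S in feasible],
                  [cov(S, group_b) for S in feasible]])
b_ub = -np.array([0.8, 0.8])               # expected per-group quotas
A_eq, b_eq = np.ones((1, len(feasible))), np.array([1.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
support = {tuple(sorted(S)): round(p, 3) for S, p in zip(feasible, res.x) if p > 1e-9}
print("expected global utility:", -res.fun)
print("optimal distribution:", support)
```

In general, of course, $\mathcal{F}$ is exponentially large, which is why the paper develops approximation algorithms instead of solving this LP explicitly.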
Related papers
- Minimax Group Fairness in Strategic Classification [8.250258160056514]
In strategic classification, agents manipulate their features, at a cost, to receive a positive classification outcome from the learner's classifier.
We consider learning objectives that have group fairness guarantees in addition to accuracy guarantees.
We formalize a fairness-aware Stackelberg game between a population of agents consisting of several groups, with each group having its own cost function.
arXiv Detail & Related papers (2024-10-03T14:22:55Z)
- Federated Fairness without Access to Sensitive Groups [12.888927461513472]
Current approaches to group fairness in federated learning assume the existence of predefined and labeled sensitive groups during training.
We propose a new approach to guarantee group fairness that does not rely on any predefined definition of sensitive groups or additional labels.
arXiv Detail & Related papers (2024-02-22T19:24:59Z)
- A Canonical Data Transformation for Achieving Inter- and Within-group Fairness [17.820200610132265]
We introduce a formal definition of within-group fairness that maintains fairness among individuals from within the same group.
We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy.
We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance to two regularization-based methods.
arXiv Detail & Related papers (2023-10-23T17:00:20Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and with a range of ranking fairness metrics, both supervised and unsupervised.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Achieving Long-term Fairness in Submodular Maximization through Randomization [16.33001220320682]
It is important to implement fairness-aware algorithms when dealing with data items that may contain sensitive attributes like race or gender.
We investigate the problem of maximizing a monotone submodular function while meeting group fairness constraints.
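For intuition, here is a minimal greedy sketch for monotone submodular maximization with per-group minimum quotas. It is only a deterministic heuristic illustrating the constraint structure (all data and helper names invented); the paper's algorithm is randomized and carries approximation guarantees that plain greedy does not:

```python
def fair_greedy(items, group_of, f, k, min_per_group):
    """Select k items greedily, reserving slots so every group's quota stays satisfiable."""
    S = set()
    while len(S) < k:
        counts = {g: sum(1 for x in S if group_of[x] == g) for g in min_per_group}
        deficit = {g: max(0, q - counts[g]) for g, q in min_per_group.items()}
        slack = (k - len(S)) - sum(deficit.values())
        # An item is eligible if choosing it still leaves room for all quotas.
        cands = [x for x in items - S if deficit[group_of[x]] > 0 or slack > 0]
        if not cands:
            break
        S.add(max(cands, key=lambda x: f(S | {x}) - f(S)))  # best marginal gain
    return S

# Toy data (invented): items cover topics; f = number of topics covered (submodular).
coverage = {1: {"a", "b"}, 2: {"b"}, 3: {"c"}, 4: {"c", "d"}, 5: {"e"}}
group_of = {1: "F", 2: "F", 3: "M", 4: "M", 5: "M"}
f = lambda S: len(set().union(*[coverage[x] for x in S])) if S else 0
print(fair_greedy(set(coverage), group_of, f, k=3, min_per_group={"F": 1, "M": 1}))
```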
arXiv Detail & Related papers (2023-04-10T16:39:19Z)
- Fair Labeled Clustering [28.297893914525517]
We consider the downstream application of clustering and how group fairness should be ensured for such a setting.
We provide algorithms for such problems and show that in contrast to their NP-hard counterparts in group fair clustering, they permit efficient solutions.
We also consider a well-motivated alternative setting where the decision-maker is free to assign labels to the clusters regardless of the centers' positions in the metric space.
arXiv Detail & Related papers (2022-05-28T07:07:12Z)
- Towards Group Robustness in the presence of Partial Group Labels [61.33713547766866]
Spurious correlations between input samples and the target labels can wrongly direct the neural network predictions.
We propose an algorithm that optimizes for the worst-off group assignments from a constraint set.
We show improvements in the minority group's performance while preserving overall aggregate accuracy across groups.
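As background, the worst-group objective this line of work builds on fits in a few lines; here is a hedged numpy sketch of the generic Group-DRO-style quantity (not this paper's algorithm, which additionally copes with missing group labels):

```python
import numpy as np

def worst_group_loss(losses, groups):
    """Average loss of the worst-off group: the quantity a Group-DRO-style
    objective minimizes over model parameters."""
    return max(losses[groups == g].mean() for g in np.unique(groups))

losses = np.array([0.2, 0.9, 0.4, 0.8])   # per-example losses (invented)
groups = np.array([0, 0, 1, 1])           # per-example group ids
print(worst_group_loss(losses, groups))   # 0.6 -> group 1 is worst-off
```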
arXiv Detail & Related papers (2022-01-10T22:04:48Z)
- Focus on the Common Good: Group Distributional Robustness Follows [47.62596240492509]
This paper proposes a new and simple algorithm that explicitly encourages learning of features that are shared across various groups.
While Group-DRO focuses on the groups with the worst regularized loss, focusing instead on groups that enable better performance even on other groups could lead to learning of shared/common features.
arXiv Detail & Related papers (2021-10-06T09:47:41Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.