Proportional Fairness in Clustering: A Social Choice Perspective
- URL: http://arxiv.org/abs/2310.18162v2
- Date: Fri, 24 May 2024 08:52:23 GMT
- Title: Proportional Fairness in Clustering: A Social Choice Perspective
- Authors: Leon Kellerhals, Jannik Peters
- Abstract summary: We study the proportional clustering problem of Chen et al. [ICML'19] and relate it to the area of multiwinner voting in computational social choice.
We show that any approximation to proportional fairness is also an approximation to individual fairness and vice versa.
- Score: 14.226371312639145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the proportional clustering problem of Chen et al. [ICML'19] and relate it to the area of multiwinner voting in computational social choice. We show that any clustering satisfying a weak proportionality notion of Brill and Peters [EC'23] simultaneously obtains the best known approximations not only to the proportional fairness notion of Chen et al. [ICML'19], but also to individual fairness [Jung et al., FORC'20] and the "core" [Li et al., ICML'21]. In fact, we show that any approximation to proportional fairness is also an approximation to individual fairness and vice versa. Finally, we also study stronger notions of proportional representation, in which deviations may involve not just a single candidate center but multiple candidate centers, and show that stronger proportionality notions of Brill and Peters [EC'23] imply approximations to these stronger guarantees.
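As a concrete illustration of the Chen et al. [ICML'19] notion discussed in the abstract: a set of centers is (approximately) proportionally fair if no coalition of at least ceil(n/k) points could all get a strictly better (by a factor rho) center elsewhere. The following brute-force checker is our own hedged sketch, not code from the paper; all function names are illustrative.

```python
# Sketch: brute-force check for rho-approximate proportional fairness
# (Chen et al. [ICML'19]). A violation exists iff some candidate center y
# attracts a coalition of >= ceil(n/k) points, each of which is more than
# a factor rho closer to y than to its current nearest center.
import math


def blocking_candidate(points, centers, candidates, k, rho=1.0):
    """Return a candidate y witnessing a violation of rho-proportional
    fairness, or None if the clustering is rho-proportionally fair."""
    n = len(points)
    threshold = math.ceil(n / k)
    for y in candidates:
        coalition = sum(
            1 for x in points
            if rho * math.dist(x, y) < min(math.dist(x, c) for c in centers)
        )
        if coalition >= threshold:
            return y
    return None
```

For example, on the 1-D instance {0, 1, 10, 11} with k = 2, placing both centers on the left cluster is blocked by a candidate near the right cluster, while one center per cluster admits no blocking coalition.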
Related papers
- Method of Equal Shares with Bounded Overspending [23.242224574706285]
We introduce the Method of Equal Shares with Bounded Overspending (BOS Equal Shares).
BOS Equal Shares addresses inefficiencies inherent in strict proportionality guarantees yet still provides good proportionality similar to the original Method of Equal Shares.
In the course of the analysis, we also discuss a fractional variant of the method which allows for partial funding of projects.
arXiv Detail & Related papers (2024-09-23T13:30:25Z) - Doubly Constrained Fair Clustering [11.11449872883085]
Group Fairness (GF) and Diversity in Center Selection (DS) are the two most prominent demographic representation fairness notions in clustering.
We show that given a constant approximation algorithm for one constraint (GF or DS only) we can obtain a constant approximation solution that satisfies both constraints simultaneously.
We show that both GF and DS are incompatible with a collection of other distance-based fairness notions.
arXiv Detail & Related papers (2023-05-31T01:04:55Z) - Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA)
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z) - Strategyproof and Proportionally Fair Facility Location [77.16035689756859]
We focus on a simple, one-dimensional collective decision problem (often referred to as the facility location problem).
We analyze a hierarchy of proportionality-based fairness axioms of varying strength.
For each axiom, we characterize the family of mechanisms that satisfy the axiom and strategyproofness.
arXiv Detail & Related papers (2021-11-02T12:41:32Z) - Multi-Stage Decentralized Matching Markets: Uncertain Preferences and Strategic Behaviors [91.3755431537592]
This article develops a framework for learning optimal strategies in real-world matching markets.
We show that there exists a welfare-versus-fairness trade-off that is characterized by the uncertainty level of acceptance.
We prove that participants can be better off with multi-stage matching compared to single-stage matching.
arXiv Detail & Related papers (2021-02-13T19:25:52Z) - Achieving Proportionality up to the Maximin Item with Indivisible Goods [14.002498730240902]
We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality.
Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve even for small instances.
We show how to reach an allocation satisfying this notion (proportionality up to the maximin item) for any instance involving up to five agents with additive valuations.
arXiv Detail & Related papers (2020-09-20T19:21:19Z) - Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
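The individual fairness notion of Jung et al. [FORC'20], referenced both in the abstract above and in the clustering papers listed here, asks that every point x have a center within (a factor alpha of) r(x), the radius of the smallest ball around x containing at least ceil(n/k) points. The following sketch is our own illustration of that definition, not code from any of the papers; names and the brute-force computation are assumptions.

```python
# Sketch: individual fairness in the sense of Jung et al. [FORC'20].
# r(x) is the distance from x to its ceil(n/k)-th nearest point
# (counting x itself); a clustering is alpha-individually fair if every
# point has a center within alpha * r(x).
import math


def fair_radius(x, points, k):
    """Radius of the smallest ball around x containing ceil(n/k) points."""
    n = len(points)
    dists = sorted(math.dist(x, p) for p in points)
    return dists[math.ceil(n / k) - 1]


def is_individually_fair(points, centers, k, alpha=1.0):
    """Check that every point has a center within alpha * r(x)."""
    return all(
        min(math.dist(x, c) for c in centers) <= alpha * fair_radius(x, points, k)
        for x in points
    )
```

On the same 1-D instance {0, 1, 10, 11} with k = 2, every point has r(x) = 1, so one center per cluster is individually fair while two centers on the left cluster are not.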
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.