Approximate Proportionality in Online Fair Division
- URL: http://arxiv.org/abs/2508.03253v1
- Date: Tue, 05 Aug 2025 09:31:42 GMT
- Title: Approximate Proportionality in Online Fair Division
- Authors: Davin Choo, Winston Fu, Derek Khu, Tzeh Yuan Neoh, Tze-Yang Poon, Nicholas Teh
- Abstract summary: We focus on proportionality up to one good (PROP1), a natural relaxation of proportionality whose approximability remains unresolved. We show that three natural greedy algorithms fail to guarantee any positive approximation to PROP1 in general, against an adaptive adversary. We present an algorithm that obtains robust approximation ratios against PROP1 when given predictions of the maximum item value.
- Score: 4.200214709723945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the online fair division problem, where indivisible goods arrive sequentially and must be allocated immediately and irrevocably to agents. Prior work has established strong impossibility results for approximating classic fairness notions, such as envy-freeness and maximin share fairness, in this setting. In contrast, we focus on proportionality up to one good (PROP1), a natural relaxation of proportionality whose approximability remains unresolved. We begin by showing that three natural greedy algorithms fail to guarantee any positive approximation to PROP1 in general, against an adaptive adversary. This is surprising because greedy algorithms are commonly used in fair division, and a natural greedy algorithm is known to achieve PROP1 under additional informational assumptions. This hardness result motivates the study of non-adaptive adversaries and the use of side information, in the spirit of learning-augmented algorithms. For non-adaptive adversaries, we show that the simple uniformly random allocation achieves a meaningful PROP1 approximation with high probability. Meanwhile, we present an algorithm that obtains robust approximation ratios against PROP1 when given predictions of the maximum item value (MIV). Interestingly, we also show that stronger fairness notions such as EF1, MMS, and PROPX remain inapproximable even with perfect MIV predictions.
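To make the setting concrete: under additive valuations, an allocation is alpha-PROP1 for agent i if the value of i's bundle, plus the value of some single good outside it, is at least alpha * v_i(M)/n, where M is the full set of goods and n the number of agents. The sketch below is a minimal illustration of this setting, not the paper's implementation: it simulates the uniformly random online allocation mentioned in the abstract and checks the alpha-PROP1 condition after the fact. All names (online_uniform_allocation, satisfies_prop1, alpha) are hypothetical.

```python
import random

def online_uniform_allocation(n_agents, item_values):
    """Allocate each arriving item to an agent chosen uniformly at random.

    item_values[t][i] is agent i's value for the item arriving at time t;
    in the online model it is revealed only on arrival, and each assignment
    is immediate and irrevocable.
    """
    bundles = [set() for _ in range(n_agents)]
    for t in range(len(item_values)):
        bundles[random.randrange(n_agents)].add(t)
    return bundles

def satisfies_prop1(bundles, item_values, alpha=1.0):
    """Check alpha-PROP1 under additive valuations: each agent's bundle
    value, plus the value of some one good outside the bundle, must reach
    alpha times the agent's proportional share v_i(M)/n."""
    n = len(bundles)
    m = len(item_values)
    for i in range(n):
        own = sum(item_values[t][i] for t in bundles[i])
        share = alpha * sum(item_values[t][i] for t in range(m)) / n
        best_outside = max(
            (item_values[t][i] for t in range(m) if t not in bundles[i]),
            default=0.0,
        )
        if own + best_outside < share:
            return False
    return True

if __name__ == "__main__":
    n, m = 4, 30
    values = [[random.random() for _ in range(n)] for _ in range(m)]
    bundles = online_uniform_allocation(n, values)
    print(satisfies_prop1(bundles, values, alpha=0.5))
```

Note that the check is purely diagnostic: the paper's guarantee for the random allocation is probabilistic and stated against non-adaptive adversaries, so any single run of this sketch may still fail the condition.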
Related papers
- Provably Mitigating Overoptimization in RLHF: Your SFT Loss is Implicitly an Adversarial Regularizer [52.09480867526656]
We identify the source of misalignment as a form of distributional shift and uncertainty in learning human preferences. To mitigate overoptimization, we first propose a theoretical algorithm that chooses the best policy for an adversarially chosen reward model. Using the equivalence between reward models and the corresponding optimal policy, the algorithm features a simple objective that combines a preference optimization loss and a supervised learning loss.
arXiv Detail & Related papers (2024-05-26T05:38:50Z) - Rethinking Algorithmic Fairness for Human-AI Collaboration [29.334511328067777]
Existing approaches to algorithmic fairness aim to ensure equitable outcomes if human decision-makers comply perfectly with algorithms. We show that it may be infeasible to design algorithmic recommendations that are simultaneously fair in isolation, compliance-robustly fair, and more accurate than the human policy.
arXiv Detail & Related papers (2023-10-05T16:21:42Z) - Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Metrizing Fairness [5.323439381187456]
We study supervised learning problems that have significant effects on individuals from two demographic groups.
We seek predictors that are fair with respect to a group fairness criterion such as statistical parity (SP).
In this paper, we identify conditions under which hard SP constraints are guaranteed to improve predictive accuracy.
arXiv Detail & Related papers (2022-05-30T12:28:10Z) - Fair Bayes-Optimal Classifiers Under Predictive Parity [33.648053823193855]
This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction among different protected groups.
We propose an algorithm we call FairBayes-DPP, aiming to ensure predictive parity when our condition is satisfied.
arXiv Detail & Related papers (2022-05-15T04:58:10Z) - Domain Adaptation meets Individual Fairness. And they get along [48.95808607591299]
We show that algorithmic fairness interventions can help machine learning models overcome distribution shifts.
In particular, we show that enforcing suitable notions of individual fairness (IF) can improve the out-of-distribution accuracy of ML models.
arXiv Detail & Related papers (2022-05-01T16:19:55Z) - Learning fair representation with a parametric integral probability metric [2.544539499281093]
We propose a new adversarial training scheme for learning fair representation (LFR).
In this paper, we derive theoretical relations between the fairness of representation and the fairness of the prediction model built on the top of the representation.
Our proposed LFR algorithm is computationally lighter and more stable, and the final prediction model is competitive or superior to other LFR algorithms.
arXiv Detail & Related papers (2022-02-07T05:02:23Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - Selective Classification via One-Sided Prediction [54.05407231648068]
A one-sided prediction (OSP) based relaxation yields a selective classification (SC) scheme that attains near-optimal coverage in the practically relevant high target accuracy regime.
We theoretically derive generalization bounds for SC and OSP, and empirically show that our scheme strongly outperforms state-of-the-art methods in coverage at small error levels.
arXiv Detail & Related papers (2020-10-15T16:14:27Z) - Achieving Proportionality up to the Maximin Item with Indivisible Goods [14.002498730240902]
We study the problem of fairly allocating indivisible goods and focus on the classic fairness notion of proportionality.
Recent work has established that even approximate versions of proportionality (PROPx) may be impossible to achieve, even for small instances.
We show how to reach an allocation satisfying this notion for any instance involving up to five agents with additive valuations.
arXiv Detail & Related papers (2020-09-20T19:21:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.