Nonstationary Dual Averaging and Online Fair Allocation
- URL: http://arxiv.org/abs/2202.11614v2
- Date: Mon, 17 Oct 2022 16:53:44 GMT
- Title: Nonstationary Dual Averaging and Online Fair Allocation
- Authors: Luofeng Liao, Yuan Gao, Christian Kroer
- Abstract summary: We develop new online learning results for the dual averaging algorithm under nonstationary input models.
Our results apply to several nonstationary input models: adversarial corruption, ergodic input, and block-independent (including periodic) input.
- Score: 32.85609942201268
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of fairly allocating items to a set of individuals,
when the items are arriving online. A central solution concept in fair
allocation is competitive equilibrium: every individual is endowed with a
budget of faux currency, and the resulting competitive equilibrium is used to
allocate. For the online fair allocation context, the PACE algorithm of Gao et
al. [2021] leverages the dual averaging algorithm to approximate competitive
equilibria. The authors show that, when items arrive i.i.d., the algorithm
asymptotically achieves the fairness and efficiency guarantees of the offline
competitive equilibrium allocation. However, real-world data is typically not
stationary. One could instead model the data as adversarial, but this is often
too pessimistic in practice. Motivated by this consideration, we study an
online fair allocation setting with nonstationary item arrivals. To address
this setting, we first develop new online learning results for the dual
averaging algorithm under nonstationary input models. We show that the dual
averaging iterates converge in mean square to both the underlying optimal
solution of the "true" stochastic optimization problem as well as the
"hindsight" optimal solution of the finite-sum problem given by the sample
path. Our results apply to several nonstationary input models: adversarial
corruption, ergodic input, and block-independent (including periodic) input.
Here, the bound on the mean square error depends on a nonstationarity measure
of the input. We recover the classical bound when the input data is i.i.d. We
then show that our dual averaging results imply that the PACE algorithm for
online fair allocation simultaneously achieves "best of both worlds" guarantees
against any of these input models. Finally, we conduct numerical experiments
which show strong empirical performance against nonstationary inputs.
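To make the PACE mechanism described above concrete, here is a minimal sketch of a PACE-style dual-averaging loop, assuming linear utilities: each agent bids its pacing multiplier times its value for the arriving item, the item goes to the highest bidder, observed utilities are averaged over time (the dual-averaging step), and each multiplier is reset to budget divided by average utility, projected onto a fixed interval. The initialization, projection bounds, and tie-breaking below are illustrative assumptions, not the exact algorithm; see Gao et al. [2021] and this paper for the precise updates and guarantees.

```python
import numpy as np

def pace_sketch(values, budgets, beta_lo, beta_hi):
    """Minimal sketch of a PACE-style dual-averaging allocation loop.

    values  : (T, n) array; values[t, i] is agent i's value for item t
              (the possibly nonstationary input sequence).
    budgets : (n,) array of faux-currency budgets.
    beta_lo, beta_hi : projection bounds for the pacing multipliers
              (illustrative; PACE derives these from the budgets).
    Returns the final pacing multipliers and the per-item winners.
    """
    T, n = values.shape
    beta = budgets.copy()           # initial pacing multipliers (illustrative)
    avg_util = np.zeros(n)          # dual-averaged (time-averaged) utilities
    winners = np.empty(T, dtype=int)

    for t in range(T):
        v = values[t]
        # Each agent bids beta_i * v_i; the item goes to the highest bidder.
        winner = int(np.argmax(beta * v))
        winners[t] = winner

        # Realized utilities this round: only the winner receives the item.
        u = np.zeros(n)
        u[winner] = v[winner]

        # Dual-averaging step: running average of observed utilities.
        avg_util = (t / (t + 1)) * avg_util + u / (t + 1)

        # Pacing update: beta_i tracks budget_i / average utility,
        # projected onto a fixed interval for stability.
        with np.errstate(divide="ignore"):
            target = np.where(avg_util > 0, budgets / avg_util, beta_hi)
        beta = np.clip(target, beta_lo, beta_hi)

    return beta, winners
```

The paper's analysis concerns how the time-averaged quantities in this loop behave when the value sequence is not i.i.d. but instead corrupted, ergodic, or block-independent.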
Related papers
- Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium [0.3350491650545292]
Current methods for mitigating bias often result in information loss and an inadequate balance between accuracy and fairness.
We propose a novel methodology grounded in bilevel optimization principles.
Our deep learning-based approach concurrently optimizes for both accuracy and fairness objectives.
arXiv Detail & Related papers (2024-10-21T18:53:39Z) - Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z) - Preventing Discriminatory Decision-making in Evolving Data Streams [8.952662914331901]
Bias in machine learning has rightly received significant attention over the last decade.
Most fair machine learning (fair-ML) work to address bias in decision-making systems has focused solely on the offline setting.
Despite the wide prevalence of online systems in the real world, work on identifying and correcting bias in the online setting is severely lacking.
arXiv Detail & Related papers (2023-02-16T01:20:08Z) - Learning Equilibria in Matching Markets from Bandit Feedback [139.29934476625488]
We develop a framework and algorithms for learning stable market outcomes under uncertainty.
Our work takes a first step toward elucidating when and how stable matchings arise in large, data-driven marketplaces.
arXiv Detail & Related papers (2021-08-19T17:59:28Z) - FairBalance: How to Achieve Equalized Odds With Data Pre-processing [15.392349679172707]
This research seeks to benefit the software engineering community by providing a simple yet effective pre-processing approach to achieve equalized odds fairness in machine learning software.
We propose FairBalance, a pre-processing algorithm that balances the class distribution in each demographic group by assigning calculated weights to the training data.
arXiv Detail & Related papers (2021-07-17T20:40:45Z) - Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems [7.433931244705934]
We consider a data-driven setting in which the reward and resource consumption of each request are generated using an input model unknown to the decision maker.
We design a general class of algorithms that attain good performance under various input models without knowing which type of input they are facing.
Our algorithms operate in the Lagrangian dual space: they maintain a dual multiplier for each resource that is updated using online mirror descent (a schematic version of this update is sketched after this list).
arXiv Detail & Related papers (2020-11-18T18:39:17Z) - Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z) - Regularized Online Allocation Problems: Fairness and Beyond [7.433931244705934]
We introduce the regularized online allocation problem, a variant that includes a non-linear regularizer acting on the total resource consumption.
In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources.
The objective is to simultaneously maximize additively separable rewards and the value of a non-separable regularizer subject to the resource constraints.
arXiv Detail & Related papers (2020-07-01T14:24:58Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
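As referenced in the "The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems" entry above, the following is a minimal sketch of a dual mirror descent loop for online allocation with a Euclidean mirror map (i.e., a projected-gradient dual step). The function name, the per-period budget `rho`, the step size `eta`, and the null-action handling are illustrative assumptions, not that paper's exact algorithm.

```python
import numpy as np

def dual_mirror_descent_allocation(requests, rho, eta, T):
    """Schematic dual mirror descent for online allocation (Euclidean mirror map).

    requests : iterable of (rewards, consumptions) pairs, where rewards is a
               (k,) array and consumptions is a (k, m) array over k candidate actions.
    rho      : (m,) per-period resource budget (total budget / T).
    eta      : step size for the dual update.
    T        : horizon (number of requests).
    """
    m = len(rho)
    mu = np.zeros(m)            # one dual multiplier per resource
    remaining = rho * T         # total resource budget
    decisions = []

    for t, (rewards, consumptions) in enumerate(requests):
        if t >= T:
            break
        # Primal step: choose the action maximizing the dual-adjusted reward.
        scores = rewards - consumptions @ mu
        j = int(np.argmax(scores))
        # Take the action only if it is profitable and feasible; otherwise skip.
        if scores[j] <= 0 or np.any(consumptions[j] > remaining):
            decisions.append(None)
            consumed = np.zeros(m)
        else:
            decisions.append(j)
            consumed = consumptions[j]
            remaining -= consumed

        # Dual step (Euclidean mirror map = projected gradient):
        # multipliers rise when consumption exceeds the per-period budget.
        mu = np.maximum(0.0, mu + eta * (consumed - rho))

    return decisions, mu
```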
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.