Building Socially-Equitable Public Models
- URL: http://arxiv.org/abs/2406.02790v1
- Date: Tue, 4 Jun 2024 21:27:43 GMT
- Title: Building Socially-Equitable Public Models
- Authors: Yejia Liu, Jianyi Yang, Pengfei Li, Tongxin Li, Shaolei Ren
- Abstract summary: Public models offer predictions to a variety of downstream tasks and have played a crucial role in various AI applications.
We advocate for integrating the objectives of downstream agents into the optimization process.
We propose a novel Equitable Objective to address performance disparities and foster fairness among heterogeneous agents in training.
- Score: 32.35090986784889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Public models offer predictions to a variety of downstream tasks and have played a crucial role in various AI applications, showcasing their proficiency in accurate predictions. However, the exclusive emphasis on prediction accuracy may not align with the diverse end objectives of downstream agents. Recognizing the public model's predictions as a service, we advocate for integrating the objectives of downstream agents into the optimization process. Concretely, to address performance disparities and foster fairness among heterogeneous agents in training, we propose a novel Equitable Objective. This objective, coupled with a policy gradient algorithm, is crafted to train the public model to produce a more equitable/uniform performance distribution across downstream agents, each with their unique concerns. Both theoretical analysis and empirical case studies have proven the effectiveness of our method in advancing performance equity across diverse downstream agents utilizing the public model for their decision-making. Codes and datasets are released at https://github.com/Ren-Research/Socially-Equitable-Public-Models.
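The paper's exact Equitable Objective is defined in the full text; as a hedged illustration of the idea, a power-mean surrogate over per-agent costs (an assumption here, not the paper's formula) shows how raising each downstream agent's cost to a power q > 1 weights poorly-served agents more heavily, so minimizing the objective pushes the public model toward a more uniform cost distribution across heterogeneous agents.

```python
import numpy as np

def equitable_loss(agent_costs, q=2.0):
    """Power-mean surrogate for an equitable objective.

    With q > 1, agents with high costs dominate the loss, so gradient
    descent preferentially reduces the worst agents' costs; q = 1
    recovers the plain average, which ignores disparity.
    """
    costs = np.asarray(agent_costs, dtype=float)
    return float((costs ** q).mean())
```

Note that `[1.0, 1.0]` and `[0.0, 2.0]` have the same average cost, but the surrogate assigns the uneven allocation a strictly larger loss, which is exactly the equity-promoting behavior the abstract describes.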
Related papers
- Reconciling Model Multiplicity for Downstream Decision Making [24.335927243672952]
We show that even when the two predictive models approximately agree on their individual predictions almost everywhere, it is still possible for their induced best-response actions to differ on a substantial portion of the population.
We propose a framework that calibrates the predictive models with regard to both the downstream decision-making problem and the individual probability prediction.
arXiv Detail & Related papers (2024-05-30T03:36:46Z) - Learning Efficient and Fair Policies for Uncertainty-Aware Collaborative Human-Robot Order Picking [11.997524293204368]
In collaborative human-robot order picking systems, human pickers and Autonomous Mobile Robots (AMRs) travel independently through a warehouse and meet at pick locations where pickers load items onto AMRs.
We propose a novel multi-objective Deep Reinforcement Learning (DRL) approach to learn allocation policies that improve pick efficiency while also improving workload fairness among human pickers.
arXiv Detail & Related papers (2024-04-09T11:45:16Z) - Certified Human Trajectory Prediction [66.1736456453465]
Trajectory prediction plays an essential role in autonomous vehicles.
We propose a certification approach tailored for the task of trajectory prediction.
We address the inherent challenges associated with trajectory prediction, including unbounded outputs and multi-modality.
arXiv Detail & Related papers (2024-03-20T17:41:35Z) - Fair Multivariate Adaptive Regression Splines for Ensuring Equity and Transparency [1.124958340749622]
We propose a fair predictive model based on MARS that incorporates fairness measures in the learning process.
MARS is a non-parametric regression model that performs feature selection, handles non-linear relationships, generates interpretable decision rules, and derives optimal splitting criteria on the variables.
We apply our fairMARS model to real-world data and demonstrate its effectiveness in terms of accuracy and equity.
arXiv Detail & Related papers (2024-02-23T19:02:24Z) - Travel Demand Forecasting: A Fair AI Approach [0.9383397937755517]
We propose a novel methodology to develop fairness-aware, highly-accurate travel demand forecasting models.
Specifically, we introduce a new fairness regularization term, which is explicitly designed to measure the correlation between prediction accuracy and protected attributes.
Results highlight that our proposed methodology can effectively enhance fairness for multiple protected attributes while preserving prediction accuracy.
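The abstract describes a regularization term measuring the correlation between prediction accuracy and protected attributes. A minimal sketch of that idea, assuming a squared-Pearson-correlation penalty on per-example absolute errors (the paper's precise term may differ):

```python
import numpy as np

def fairness_penalty(y_true, y_pred, protected):
    """Squared Pearson correlation between per-example absolute error
    and a protected attribute; zero when errors are uncorrelated
    with the attribute, one when they are perfectly correlated."""
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    if err.std() == 0 or np.std(protected) == 0:
        return 0.0  # constant vectors have no defined correlation
    r = np.corrcoef(err, np.asarray(protected))[0, 1]
    return float(r ** 2)

def total_loss(y_true, y_pred, protected, lam=1.0):
    """Accuracy term (MSE) plus the fairness regularizer, weighted by lam."""
    mse = float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))
    return mse + lam * fairness_penalty(y_true, y_pred, protected)
```

Tuning `lam` trades accuracy against fairness: a model whose errors concentrate on one protected group pays a penalty, while a model with group-independent errors does not.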
arXiv Detail & Related papers (2023-03-03T03:16:54Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence, predicting target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
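A minimal sketch of the ATC idea: calibrate a confidence threshold on labeled source data so that the fraction of source examples above it matches the observed source accuracy, then estimate target accuracy as the fraction of unlabeled target examples above that same threshold (the published method also supports other score functions, such as negative entropy).

```python
import numpy as np

def learn_threshold(source_conf, source_correct):
    """Pick a threshold t so that the fraction of source examples with
    confidence above t matches the observed source accuracy."""
    acc = np.asarray(source_correct).mean()
    # The (1 - acc)-quantile of the confidences leaves a fraction
    # of roughly acc above it.
    return np.quantile(np.asarray(source_conf), 1.0 - acc)

def predict_target_accuracy(target_conf, threshold):
    """Estimated target accuracy: fraction of unlabeled target
    examples whose confidence exceeds the learned threshold."""
    return float((np.asarray(target_conf) > threshold).mean())
```

The estimate requires no target labels at all, which is what makes the method practical under the source/target mismatch the abstract describes.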
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Test-time Collective Prediction [73.74982509510961]
In an increasingly common machine learning setting, multiple parties want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z) - Mind the Trade-off: Debiasing NLU Models without Degrading the In-distribution Performance [70.31427277842239]
We introduce a novel debiasing method called confidence regularization.
It discourages models from exploiting biases while enabling them to receive enough incentive to learn from all the training examples.
We evaluate our method on three NLU tasks and show that, in contrast to its predecessors, it improves the performance on out-of-distribution datasets.
arXiv Detail & Related papers (2020-05-01T11:22:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.