The Isotonic Mechanism for Exponential Family Estimation
- URL: http://arxiv.org/abs/2304.11160v3
- Date: Mon, 2 Oct 2023 14:33:05 GMT
- Title: The Isotonic Mechanism for Exponential Family Estimation
- Authors: Yuling Yan, Weijie J. Su, Jianqing Fan
- Abstract summary: In 2023, the International Conference on Machine Learning (ICML) required authors with multiple submissions to rank their submissions based on perceived quality.
In this paper, we aim to employ these author-specified rankings to enhance peer review in machine learning and artificial intelligence conferences.
This mechanism generates adjusted scores that closely align with the original scores while adhering to author-specified rankings.
- Score: 31.542906034919977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In 2023, the International Conference on Machine Learning (ICML) required
authors with multiple submissions to rank their submissions based on perceived
quality. In this paper, we aim to employ these author-specified rankings to
enhance peer review in machine learning and artificial intelligence conferences
by extending the Isotonic Mechanism to exponential family distributions. This
mechanism generates adjusted scores that closely align with the original scores
while adhering to author-specified rankings. Despite its applicability to a
broad spectrum of exponential family distributions, implementing this mechanism
does not require knowledge of the specific distribution form. We demonstrate
that an author is incentivized to provide accurate rankings when her utility
takes the form of a convex additive function of the adjusted review scores. For
a certain subclass of exponential family distributions, we prove that the
author reports truthfully only if the question involves only pairwise
comparisons between her submissions, thus indicating the optimality of ranking
in truthful information elicitation. Moreover, we show that the adjusted scores
dramatically improve estimation accuracy compared to the original scores
and achieve nearly minimax optimality when the ground-truth scores have bounded
total variation. We conclude the paper by presenting experiments conducted on
the ICML 2023 ranking data, which show significant estimation gain using the
Isotonic Mechanism.
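The core step of the Isotonic Mechanism is a projection: find adjusted scores as close as possible (in squared error, for the Gaussian case) to the raw review scores, subject to being ordered consistently with the author's ranking. This is standard isotonic regression, solvable by the Pool Adjacent Violators Algorithm (PAVA). The sketch below is illustrative only; the function name and example numbers are not from the paper, and the paper's actual contribution extends this projection to general exponential family distributions.

```python
def isotonic_adjust(raw_scores, ranking):
    """Adjust raw scores to respect the author's ranking.

    `ranking` lists submission indices from worst to best; the adjusted
    scores are the closest (in squared error) non-decreasing sequence
    along that order -- computed by PAVA.
    """
    # Reorder scores so the constraint becomes simply "non-decreasing".
    y = [raw_scores[i] for i in ranking]

    # PAVA: keep blocks of [sum, count]; merge adjacent blocks whose
    # means violate the non-decreasing order.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        while len(blocks) > 1 and (
            blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]
        ):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c

    # Expand each block to its mean, then map back to original indices.
    fitted = []
    for s, c in blocks:
        fitted.extend([s / c] * c)
    adjusted = [0.0] * len(raw_scores)
    for pos, idx in enumerate(ranking):
        adjusted[idx] = fitted[pos]
    return adjusted

# Author ranks submission 2 best, then 0, then 1; raw scores disagree,
# so PAVA pools the violating scores into their common mean.
print(isotonic_adjust([6.0, 7.0, 5.0], ranking=[1, 0, 2]))  # → [6.0, 6.0, 6.0]
```

Note that when the raw scores already agree with the ranking, the projection leaves them unchanged, which is why the adjusted scores "closely align with the original scores" in the well-calibrated case.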
Related papers
- A Data Envelopment Analysis Approach for Assessing Fairness in Resource Allocation: Application to Kidney Exchange Programs [3.130722489512822]
We present a novel framework leveraging Data Envelopment Analysis (DEA) to evaluate fairness criteria.
We analyze Priority fairness through waitlist durations, Access fairness through Kidney Donor Profile Index scores, and Outcome fairness through graft lifespan.
Our study provides a rigorous framework for evaluating fairness in complex resource allocation systems.
arXiv Detail & Related papers (2024-09-18T15:17:43Z) - Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML)
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
arXiv Detail & Related papers (2024-08-24T01:51:23Z) - Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z) - Kernel Density Estimation for Multiclass Quantification [52.419589623702336]
Quantification is the supervised machine learning task concerned with obtaining accurate predictors of class prevalence.
Distribution-matching (DM) approaches form one of the most important families of quantification methods proposed to date.
We propose a new representation mechanism based on multivariate densities that we model via kernel density estimation (KDE)
arXiv Detail & Related papers (2023-12-31T13:19:27Z) - Being Aware of Localization Accuracy By Generating Predicted-IoU-Guided Quality Scores [24.086202809990795]
We develop an elegant LQE branch to acquire localization quality score guided by predicted IoU.
A novel one stage detector termed CLQ is proposed.
Experiments show that CLQ achieves state-of-the-art performance with an accuracy of 47.8 AP at a speed of 11.5 fps.
arXiv Detail & Related papers (2023-09-23T05:27:59Z) - Conformalized Fairness via Quantile Regression [8.180169144038345]
We propose a novel framework to learn a real-valued quantile function under the fairness requirement of Demographic Parity.
We establish theoretical guarantees of distribution-free coverage and exact fairness for the induced prediction interval constructed by fair quantiles.
Our results show the model's ability to uncover the mechanism underlying the fairness-accuracy trade-off in a wide range of societal and medical applications.
arXiv Detail & Related papers (2022-10-05T04:04:15Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - You Are the Best Reviewer of Your Own Papers: An Owner-Assisted Scoring Mechanism [17.006003864727408]
The Isotonic Mechanism improves imprecise raw scores by leveraging information that the owner is incentivized to provide.
It reports adjusted scores for the items by solving a convex optimization problem.
I prove that the adjusted scores provided by this owner-assisted mechanism are indeed significantly more accurate than the raw scores provided by the reviewers.
arXiv Detail & Related papers (2021-10-27T22:11:29Z) - Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.