Test-time Collective Prediction
- URL: http://arxiv.org/abs/2106.12012v1
- Date: Tue, 22 Jun 2021 18:29:58 GMT
- Title: Test-time Collective Prediction
- Authors: Celestine Mendler-Dünner, Wenshuo Guo, Stephen Bates, Michael I. Jordan
- Abstract summary: An increasingly common setting in machine learning involves multiple parties who want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
- Score: 73.74982509510961
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An increasingly common setting in machine learning involves multiple parties,
each with their own data, who want to jointly make predictions on future test
points. Agents wish to benefit from the collective expertise of the full set of
agents to make better predictions than they would individually, but may not be
willing to release their data or model parameters. In this work, we explore a
decentralized mechanism to make collective predictions at test time, leveraging
each agent's pre-trained model without relying on external validation, model
retraining, or data pooling. Our approach takes inspiration from the literature
in social science on human consensus-making. We analyze our mechanism
theoretically, showing that it converges to inverse mean-squared-error (MSE)
weighting in the large-sample limit. To compute error bars on the collective
predictions we propose a decentralized Jackknife procedure that evaluates the
sensitivity of our mechanism to a single agent's prediction. Empirically, we
demonstrate that our scheme effectively combines models with differing quality
across the input space. The proposed consensus prediction achieves significant
gains over classical model averaging, and even outperforms weighted averaging
schemes that have access to additional validation data.
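
As a rough illustration of the limit result above, the sketch below computes an inverse-MSE-weighted consensus together with a leave-one-agent-out jackknife error bar. It is not the paper's decentralized protocol: it assumes hypothetical per-agent error estimates `mse` are available in one place, which the actual mechanism is designed to avoid.

```python
import numpy as np

def inverse_mse_weights(mse):
    """Weights proportional to 1/MSE, normalized to sum to one."""
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

def consensus_prediction(preds, mse):
    """Inverse-MSE-weighted average of per-agent predictions at a test point."""
    return float(np.dot(inverse_mse_weights(mse), np.asarray(preds, dtype=float)))

def jackknife_error_bar(preds, mse):
    """Leave-one-agent-out jackknife standard error of the consensus."""
    preds, mse = np.asarray(preds, dtype=float), np.asarray(mse, dtype=float)
    n = len(preds)
    loo = np.array([
        consensus_prediction(np.delete(preds, i), np.delete(mse, i))
        for i in range(n)
    ])
    # Classical jackknife variance from the spread of leave-one-out estimates.
    return float(np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2)))

# Hypothetical usage: three agents' predictions and their (assumed known) MSEs.
preds, mse = [2.1, 1.8, 2.4], [0.5, 0.2, 1.0]
print(consensus_prediction(preds, mse), "+/-", jackknife_error_bar(preds, mse))
```

The jackknife here mirrors the paper's idea of measuring sensitivity to a single agent's prediction, though the paper evaluates that sensitivity in a decentralized fashion.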
Related papers
- Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL)
It exploits the dependencies among a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into an ensemble score with weights.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and real-world application of risk genes discovery.
arXiv Detail & Related papers (2024-08-14T20:14:42Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Building Socially-Equitable Public Models [32.35090986784889]
Public models offer predictions to a variety of downstream tasks and have played a crucial role in various AI applications.
We advocate for integrating the objectives of downstream agents into the optimization process.
We propose a novel Equitable Objective to address performance disparities and foster fairness among heterogeneous agents in training.
arXiv Detail & Related papers (2024-06-04T21:27:43Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Boosted Control Functions [10.503777692702952]
This work aims to bridge the gap between causal effect estimation and prediction tasks.
We establish a novel connection between the field of distribution generalization from machine learning and the simultaneous equation models and control functions of econometrics.
Within this framework, we propose a strong notion of invariance for a predictive model and compare it with existing (weaker) versions.
arXiv Detail & Related papers (2023-10-09T15:43:46Z)
- Distributionally Robust Machine Learning with Multi-source Data [6.383451076043423]
We introduce a group distributionally robust prediction model that optimizes an adversarial reward, defined in terms of explained variance, over a class of target distributions.
Compared to classical empirical risk minimization, the proposed robust prediction model improves the prediction accuracy for target populations with distribution shifts.
We demonstrate the performance of our proposed group distributionally robust method on simulated and real data with random forests and neural networks as base-learning algorithms.
arXiv Detail & Related papers (2023-09-05T13:19:40Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Robust Validation: Confident Predictions Even When Distributions Shift [19.327409270934474]
We describe procedures for robust predictive inference, where a model provides uncertainty estimates on its predictions rather than point predictions.
We present a method that produces prediction sets (almost exactly) giving the right coverage level for any test distribution in an $f$-divergence ball around the training population.
An essential component of our methodology is to estimate the amount of expected future data shift and build robustness to it.
arXiv Detail & Related papers (2020-08-10T17:09:16Z)
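
As a loose companion to the last entry, the sketch below shows a much simpler special case of shift-robust coverage than that paper's f-divergence construction: if the test distribution lies within a hypothetical total-variation budget `rho` of the calibration distribution, inflating a split-conformal quantile from level 1 - alpha to 1 - alpha + rho preserves coverage, since no event's probability can move by more than rho. Finite-sample corrections are omitted.

```python
import numpy as np

def robust_conformal_quantile(scores, alpha, rho):
    """Quantile of calibration nonconformity scores, inflated by a
    total-variation shift budget rho (a simplified stand-in for an
    f-divergence-ball construction)."""
    level = min(1.0, 1.0 - alpha + rho)
    return float(np.quantile(np.asarray(scores, dtype=float), level, method="higher"))

# Hypothetical usage: absolute residuals |y - model(x)| on a calibration set.
rng = np.random.default_rng(0)
cal_scores = np.abs(rng.normal(size=500))
q = robust_conformal_quantile(cal_scores, alpha=0.1, rho=0.05)
# Prediction interval for a new input x: [model(x) - q, model(x) + q]
```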
This list is automatically generated from the titles and abstracts of the papers in this site.