Are You Smarter Than a Random Expert? The Robust Aggregation of
Substitutable Signals
- URL: http://arxiv.org/abs/2111.03153v1
- Date: Thu, 4 Nov 2021 20:50:30 GMT
- Title: Are You Smarter Than a Random Expert? The Robust Aggregation of
Substitutable Signals
- Authors: Eric Neyman and Tim Roughgarden
- Abstract summary: This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then *extremizing* the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
- Score: 14.03122229316614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The problem of aggregating expert forecasts is ubiquitous in fields as
wide-ranging as machine learning, economics, climate science, and national
security. Despite this, our theoretical understanding of this question is
fairly shallow. This paper initiates the study of forecast aggregation in a
context where experts' knowledge is chosen adversarially from a broad class of
information structures. While in full generality it is impossible to achieve a
nontrivial performance guarantee, we show that doing so is possible under a
condition on the experts' information structure that we call *projective
substitutes*. The projective substitutes condition is a notion of informational
substitutes: that there are diminishing marginal returns to learning the
experts' signals. We show that under the projective substitutes condition,
taking the average of the experts' forecasts improves substantially upon the
strategy of trusting a random expert. We then consider a more permissive
setting, in which the aggregator has access to the prior. We show that by
averaging the experts' forecasts and then *extremizing* the average by
moving it away from the prior by a constant factor, the aggregator's
performance guarantee is substantially better than is possible without
knowledge of the prior. Our results give a theoretical grounding to past
empirical research on extremization and help give guidance on the appropriate
amount to extremize.
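The averaging-and-extremizing strategy described above can be written down in a few lines. The sketch below is a minimal illustration, not the paper's exact aggregator: the function name, the placeholder value `extremization_factor=1.5`, and the clipping to [0, 1] (which treats the forecasts as probabilities of a binary event) are choices made for this example, while the paper's contribution is the worst-case analysis of how much extremization is warranted.

```python
import numpy as np

def aggregate_forecasts(forecasts, prior=None, extremization_factor=1.5):
    """Aggregate expert probability forecasts of a binary event.

    Without a prior, fall back to the simple average, which already
    improves on trusting a random expert under projective substitutes.
    With a prior, extremize the average by pushing it away from the
    prior by a constant factor (the 1.5 here is only a placeholder).
    """
    avg = float(np.mean(forecasts))
    if prior is None:
        return avg
    extremized = prior + extremization_factor * (avg - prior)
    return float(np.clip(extremized, 0.0, 1.0))  # keep a valid probability

# Prior 0.5, experts lean toward the event happening.
print(aggregate_forecasts([0.7, 0.6, 0.8]))             # simple average: 0.7
print(aggregate_forecasts([0.7, 0.6, 0.8], prior=0.5))  # extremized: 0.8
```

With the prior at 0.5 and the average forecast at 0.7, a factor of 1.5 moves the aggregate to 0.8, i.e., further from the prior than the average itself.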
Related papers
- Robust Decision Aggregation with Adversarial Experts [4.751372843411884]
We consider a binary decision aggregation problem in the presence of both truthful and adversarial experts.
We find the optimal aggregator minimizing regret under the worst information structure.
arXiv Detail & Related papers (2024-03-13T03:47:08Z)
- Inverse Reinforcement Learning with Sub-optimal Experts [56.553106680769474]
We study the theoretical properties of the class of reward functions that are compatible with a given set of experts.
Our results show that the presence of multiple sub-optimal experts can significantly shrink the set of compatible rewards.
We analyze a uniform sampling algorithm that is minimax optimal whenever the sub-optimal experts' performance level is sufficiently close to that of the optimal agent.
arXiv Detail & Related papers (2024-01-08T12:39:25Z)
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality [55.88910947643436]
Self-supervised pre-training is essential for handling vast quantities of unlabeled data in practice.
HiDe-Prompt is an innovative approach that explicitly optimizes the hierarchical components with an ensemble of task-specific prompts and statistics.
Our experiments demonstrate the superior performance of HiDe-Prompt and its robustness to pre-training paradigms in continual learning.
arXiv Detail & Related papers (2023-10-11T06:51:46Z)
- Consciousness-Inspired Spatio-Temporal Abstractions for Better Generalization in Reinforcement Learning [83.41487567765871]
Skipper is a model-based reinforcement learning framework.
It automatically decomposes the given task into smaller, more manageable subtasks.
It enables sparse decision-making and focused abstractions on the relevant parts of the environment.
arXiv Detail & Related papers (2023-09-30T02:25:18Z)
- Incorporating Experts' Judgment into Machine Learning Models [2.5363839239628843]
In some cases, domain experts might have a judgment about the expected outcome that conflicts with the prediction of machine learning models.
We present a novel framework that aims at leveraging experts' judgment to mitigate the conflict.
arXiv Detail & Related papers (2023-04-24T07:32:49Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Investigating User Radicalization: A Novel Dataset for Identifying Fine-Grained Temporal Shifts in Opinion [7.028604573959653]
We introduce an innovative annotated dataset for modeling subtle opinion fluctuations and detecting fine-grained stances.
The dataset includes a sufficient amount of stance polarity and intensity labels per user over time and within entire conversational threads.
All posts are annotated by non-experts and a significant portion of the data is also annotated by experts.
arXiv Detail & Related papers (2022-04-16T09:31:25Z)
- Heterogeneous-Agent Trajectory Forecasting Incorporating Class Uncertainty [54.88405167739227]
We present HAICU, a method for heterogeneous-agent trajectory forecasting that explicitly incorporates agents' class probabilities.
We additionally present PUP, a new challenging real-world autonomous driving dataset.
We demonstrate that incorporating class probabilities in trajectory forecasting significantly improves performance in the face of uncertainty.
arXiv Detail & Related papers (2021-04-26T10:28:34Z)
- Blind Exploration and Exploitation of Stochastic Experts [7.106986689736826]
We present blind exploration and exploitation (BEE) algorithms for identifying the most reliable expert based on formulations that employ posterior sampling, upper-confidence bounds, empirical Kullback-Leibler divergence, and minmax methods for the multi-armed bandit problem.
We propose an empirically realizable measure of expert competence that can be computed instantaneously using only the opinions of other experts.
arXiv Detail & Related papers (2021-04-02T15:02:02Z)
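As a generic illustration of the bandit framing in the entry above, the sketch below runs a standard UCB1 loop over experts treated as arms. It is not the paper's BEE algorithm: the reward proxy (agreement of the chosen expert with the other experts), the toy experts, and all parameter values are assumptions made for this example, loosely echoing the idea of judging competence only from other experts' opinions.

```python
import math
import random

def ucb_expert_selection(experts, rounds=5000, seed=0):
    """Select the most reliable expert with a UCB1 bandit loop.

    `experts` is a list of callables mapping a query in [0, 1] to a
    binary opinion. The reward for the chosen expert is the fraction of
    the remaining experts that agree with it -- an illustrative stand-in
    for a "blind" competence measure that never sees the ground truth.
    """
    rng = random.Random(seed)
    n = len(experts)
    counts, totals = [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        query = rng.random()
        opinions = [bool(expert(query)) for expert in experts]
        if t <= n:                      # play every arm once first
            i = t - 1
        else:                           # then mean reward + exploration bonus
            i = max(range(n), key=lambda j: totals[j] / counts[j]
                    + math.sqrt(2.0 * math.log(t) / counts[j]))
        agreement = sum(opinions[j] == opinions[i] for j in range(n) if j != i)
        counts[i] += 1
        totals[i] += agreement / (n - 1)
    return max(range(n), key=lambda j: totals[j] / counts[j])

# Toy experts: one clean signal, two noisy copies of it, one coin flip.
experts = [lambda q: q > 0.5,
           lambda q: (q > 0.5) if random.random() < 0.7 else (q <= 0.5),
           lambda q: (q > 0.5) if random.random() < 0.7 else (q <= 0.5),
           lambda q: random.random() < 0.5]
print(ucb_expert_selection(experts))  # usually picks expert 0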
- Gaussian Experts Selection using Graphical Models [7.530615321587948]
Local approximations reduce time complexity by dividing the original dataset into subsets and training a local expert on each subset.
We leverage techniques from the literature on undirected graphical models, using sparse precision matrices that encode conditional dependencies between experts to select the most important experts.
arXiv Detail & Related papers (2021-02-02T14:12:11Z)
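The selection idea in the last entry can also be made concrete with a small sketch built on scikit-learn's `GraphicalLasso`: estimate a sparse precision matrix over the experts' predictions, read conditional dependencies off its off-diagonal entries, and keep the most strongly connected experts. The ranking rule, the `alpha` value, and the toy data are assumptions for illustration, not the paper's actual selection criterion.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def select_experts(predictions, k=3, alpha=0.1):
    """Pick k experts using a sparse Gaussian graphical model.

    `predictions` has shape (n_points, n_experts): each column holds one
    local expert's predictions on shared inputs. The sparse precision
    matrix encodes conditional dependencies between experts; here we
    score each expert by the total strength of its off-diagonal entries
    (an illustrative criterion) and keep the top k.
    """
    precision = GraphicalLasso(alpha=alpha).fit(predictions).precision_
    strength = np.abs(precision)
    np.fill_diagonal(strength, 0.0)          # ignore self-dependence
    scores = strength.sum(axis=1)
    return np.argsort(scores)[::-1][:k]

# Toy data: five experts whose predictions share one latent signal;
# experts 0-2 carry the least noise and so are the most informative.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
noise_scale = np.array([0.5, 0.6, 0.7, 1.5, 2.0])
predictions = latent + noise_scale * rng.normal(size=(200, 5))
print(select_experts(predictions, k=3))  # indices of the most connected experts
```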
This list is automatically generated from the titles and abstracts of the papers on this site.