Robust forecast aggregation via additional queries
- URL: http://arxiv.org/abs/2512.05271v1
- Date: Thu, 04 Dec 2025 21:50:20 GMT
- Title: Robust forecast aggregation via additional queries
- Authors: Rafael Frongillo, Mary Monroe, Eric Neyman, Bo Waggoner
- Abstract summary: We introduce a more general framework that allows the principal to elicit richer information from experts through structured queries. Our framework ensures that experts will truthfully report their underlying beliefs, and also enables us to define notions of complexity over the difficulty of asking these queries.
- Score: 7.938872675793811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the problem of robust forecast aggregation: combining expert forecasts with provable accuracy guarantees compared to the best possible aggregation of the underlying information. Prior work shows strong impossibility results, e.g. that even under natural assumptions, no aggregation of the experts' individual forecasts can outperform simply following a random expert (Neyman and Roughgarden, 2022). In this paper, we introduce a more general framework that allows the principal to elicit richer information from experts through structured queries. Our framework ensures that experts will truthfully report their underlying beliefs, and also enables us to define notions of complexity over the difficulty of asking these queries. Under a general model of independent but overlapping expert signals, we show that optimal aggregation is achievable in the worst case with each complexity measure bounded above by the number of agents $n$. We further establish tight tradeoffs between accuracy and query complexity: aggregation error decreases linearly with the number of queries, and vanishes when the "order of reasoning" and the number of agents relevant to a query are $\omega(\sqrt{n})$. These results demonstrate that modest extensions to the space of expert queries dramatically strengthen the power of robust forecast aggregation. We therefore expect that our new query framework will open up a fruitful line of research in this area.
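To make the abstract's comparison concrete, the toy simulation below contrasts three aggregators: following a single (random) expert, averaging all forecasts, and the best possible aggregation that sees every underlying signal. This is only an illustrative sketch under assumed conditionally independent binary signals; it is not the overlapping-signal model or the query protocol from the paper, and all parameter values are made up for the example.

```python
# Toy simulation (not the paper's construction): conditionally independent
# binary signals, comparing three aggregators by Brier score. The signal model
# and all parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

PRIOR = 0.5        # P(Y = 1) before any signal
ACCURACY = 0.7     # P(signal agrees with Y) for each expert, conditionally independent
N_EXPERTS = 5
N_TRIALS = 200_000

def posterior(num_ones: int, num_signals: int) -> float:
    """Bayes posterior P(Y = 1 | signals) for conditionally i.i.d. binary signals."""
    like1 = ACCURACY ** num_ones * (1 - ACCURACY) ** (num_signals - num_ones)
    like0 = (1 - ACCURACY) ** num_ones * ACCURACY ** (num_signals - num_ones)
    return PRIOR * like1 / (PRIOR * like1 + (1 - PRIOR) * like0)

y = rng.random(N_TRIALS) < PRIOR                          # ground-truth outcomes
agree = rng.random((N_TRIALS, N_EXPERTS)) < ACCURACY      # does each signal agree with Y?
signals = np.where(y[:, None], agree, ~agree)             # observed binary signals

# Each expert reports the posterior implied by their single signal.
expert_forecasts = np.where(signals, posterior(1, 1), posterior(0, 1))

random_expert = expert_forecasts[:, 0]                    # "follow a random expert" (experts are symmetric)
simple_average = expert_forecasts.mean(axis=1)            # naive averaging
full_info = np.array([posterior(k, N_EXPERTS)             # best possible: sees all signals
                      for k in signals.sum(axis=1)])

def brier(p):
    """Mean squared error of forecast p against the realized outcome."""
    return np.mean((p - y) ** 2)

print(f"random expert : {brier(random_expert):.4f}")
print(f"simple average: {brier(simple_average):.4f}")
print(f"full info     : {brier(full_info):.4f}")
```

With these assumed parameters the full-information posterior attains a noticeably lower Brier score than either the random expert or the simple average; that gap between what any forecast-only aggregator can do and what the underlying signals support is exactly what the paper's richer query framework targets.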
Related papers
- ROG: Retrieval-Augmented LLM Reasoning for Complex First-Order Queries over Knowledge Graphs [14.25887925588904]
We propose a retrieval-augmented framework that combines query-aware neighborhood retrieval with large language model (LLM) chain-of-thought reasoning. ROG decomposes a multi-operator query into a sequence of single-operator sub-queries. Intermediate answer sets are cached and reused across steps, improving consistency on deep reasoning chains.
arXiv Detail & Related papers (2026-02-02T17:45:43Z) - Reasoning-enhanced Query Understanding through Decomposition and Interpretation [87.56450566014625]
ReDI is a Reasoning-enhanced approach for query understanding through Decomposition and Interpretation. We compiled a large-scale dataset of real-world complex queries from a major search engine. Experiments on BRIGHT and BEIR demonstrate that ReDI consistently surpasses strong baselines in both sparse and dense retrieval paradigms.
arXiv Detail & Related papers (2025-09-08T10:58:42Z) - Uncovering the Limitations of Query Performance Prediction: Failures, Insights, and Implications for Selective Query Processing [3.463527836552468]
This paper provides a comprehensive evaluation of state-of-the-art QPPs (e.g. NQC, UQC). We use diverse sparse rankers (BM25, DFree without and with query expansion), hybrid or dense rankers (SPLADE and ColBERT), and diverse test collections: ROBUST, GOV2, WT10G, and MS MARCO. Results show significant variability in predictor accuracy, with collections as the main factor and rankers next.
arXiv Detail & Related papers (2025-04-01T18:18:21Z) - Convergence Rates for Softmax Gating Mixture of Experts [78.3687645289918]
Mixture of experts (MoE) has emerged as an effective framework to advance the efficiency and scalability of machine learning models. Central to the success of MoE is an adaptive softmax gating mechanism which takes responsibility for determining the relevance of each expert to a given input and then dynamically assigning experts their respective weights. We perform a convergence analysis of parameter estimation and expert estimation under the MoE equipped with the standard softmax gating or its variants, including a dense-to-sparse gating and a hierarchical softmax gating.
arXiv Detail & Related papers (2025-03-05T06:11:24Z) - The Computational Complexity of Circuit Discovery for Inner Interpretability [0.30723404270319693]
We study circuit discovery with classical and parameterized computational complexity theory. Our findings reveal a challenging complexity landscape. This framework allows us to understand the scope and limits of interpretability queries.
arXiv Detail & Related papers (2024-10-10T15:22:48Z) - Groupwise Query Specialization and Quality-Aware Multi-Assignment for Transformer-based Visual Relationship Detection [21.352923995507595]
Visual Relationship Detection (VRD) has seen significant advancements with Transformer-based architectures recently.
We identify two key limitations in a conventional label assignment for training Transformer-based VRD models.
Groupwise Query Specialization and Quality-Aware Multi-Assignment (SpeaQ) is proposed to address these issues.
arXiv Detail & Related papers (2024-03-26T13:56:34Z) - Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study [65.11303133775857]
Mixture-of-Experts (MoE) computation amalgamates predictions from several specialized sub-models (referred to as experts).
Sparse MoE selectively engages only a limited number, or even just one expert, significantly reducing overhead while empirically preserving, and sometimes even enhancing, performance.
arXiv Detail & Related papers (2024-03-26T05:48:02Z) - Robust Decision Aggregation with Adversarial Experts [4.021926055330022]
We consider a robust aggregation problem in the presence of both truthful and adversarial experts. We aim to find the optimal aggregator that outputs a forecast minimizing regret under the worst information structure and adversarial strategies.
arXiv Detail & Related papers (2024-03-13T03:47:08Z) - Active Ranking of Experts Based on their Performances in Many Tasks [72.96112117037465]
We consider the problem of ranking n experts based on their performances on d tasks.
We make a monotonicity assumption stating that for each pair of experts, one outperforms the other on all tasks.
arXiv Detail & Related papers (2023-06-05T06:55:39Z) - Getting MoRE out of Mixture of Language Model Reasoning Experts [71.61176122960464]
We propose a Mixture-of-Reasoning-Experts (MoRE) framework that ensembles diverse specialized language models.
We specialize the backbone language model with prompts optimized for different reasoning categories, including factual, multihop, mathematical, and commonsense reasoning.
Our human study confirms that presenting expert predictions and the answer selection process helps annotators more accurately calibrate when to trust the system's output.
arXiv Detail & Related papers (2023-05-24T02:00:51Z) - Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset while also surpassing them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z) - Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior (a brief sketch of this extremization step follows this list).
arXiv Detail & Related papers (2021-11-04T20:50:30Z)
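The extremization step quoted in that last summary has a simple closed form: move the averaged forecast away from the prior by a constant factor. The sketch below is only illustrative; the factor d = 2 and the clipping to [0, 1] are assumptions for the example, not the constant derived in the cited paper.

```python
# Minimal sketch of extremization as described above: push the average forecast
# away from the prior by a constant factor d > 1. The default d = 2 is an
# illustrative assumption, not the paper's constant.
def extremize(avg_forecast: float, prior: float, d: float = 2.0) -> float:
    """Move the averaged forecast away from the prior by factor d, clipped to [0, 1]."""
    return min(1.0, max(0.0, prior + d * (avg_forecast - prior)))

# Example: prior 0.5, experts' average forecast 0.65 -> extremized to about 0.8.
print(extremize(0.65, prior=0.5, d=2.0))
```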