Sample Complexity of Forecast Aggregation
- URL: http://arxiv.org/abs/2207.13126v3
- Date: Thu, 1 Jun 2023 16:45:10 GMT
- Title: Sample Complexity of Forecast Aggregation
- Authors: Yiling Chen, Tao Lin
- Abstract summary: We consider a Bayesian forecast aggregation model where $n$ experts, after observing private signals about an unknown binary event, report their posterior beliefs about the event to a principal.
The principal aggregates the reports into a single prediction for the event.
We show that the sample complexity of this problem is at least $\tilde{\Omega}(m^{n-2} / \varepsilon)$ for arbitrary discrete distributions, where $m$ is the size of each expert's signal space.
- Score: 9.122524488932573
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a Bayesian forecast aggregation model where $n$ experts, after
observing private signals about an unknown binary event, report their posterior
beliefs about the event to a principal, who then aggregates the reports into a
single prediction for the event. The signals of the experts and the outcome of
the event follow a joint distribution that is unknown to the principal, but the
principal has access to i.i.d. "samples" from the distribution, where each
sample is a tuple of the experts' reports (not signals) and the realization of
the event. Using these samples, the principal aims to find an
$\varepsilon$-approximately optimal aggregator, where optimality is measured in
terms of the expected squared distance between the aggregated prediction and
the realization of the event. We show that the sample complexity of this
problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary
discrete distributions, where $m$ is the size of each expert's signal space.
This sample complexity grows exponentially in the number of experts $n$. But,
if the experts' signals are independent conditioned on the realization of the
event, then the sample complexity is significantly reduced, to $\tilde O(1 /
\varepsilon^2)$, which does not depend on $n$. Our results can be generalized
to non-binary events. The proof of our results uses a reduction from the
distribution learning problem and reveals the fact that forecast aggregation is
almost as difficult as distribution learning.
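To make the setup concrete, here is a minimal Python sketch of the model under conditional independence (the toy distribution, the 0.8 signal accuracy, and all names are our own illustration, not the paper's construction). It simulates the (reports, outcome) samples the principal observes and scores two aggregators by expected squared distance: naive averaging versus the closed-form Bayesian aggregator for conditionally independent signals, which multiplies the experts' posterior odds and divides by the prior odds raised to the power $n-1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_samples = 3, 100_000
prior = 0.3       # P(event = 1); assumed known here for illustration
accuracy = 0.8    # P(signal = event | event), identical for all experts

# Simulate the data-generating process: a binary event and conditionally
# independent binary signals, one per expert.
event = rng.binomial(1, prior, size=n_samples)
correct = rng.random((n_samples, n_experts)) < accuracy
signals = np.where(correct, event[:, None], 1 - event[:, None])

# Each expert reports the posterior P(event = 1 | signal_i) via Bayes' rule.
lik1 = np.where(signals == 1, accuracy, 1 - accuracy)  # P(s_i | event = 1)
lik0 = np.where(signals == 1, 1 - accuracy, accuracy)  # P(s_i | event = 0)
reports = prior * lik1 / (prior * lik1 + (1 - prior) * lik0)

def squared_loss(pred, outcome):
    """Expected squared distance between prediction and realization."""
    return np.mean((pred - outcome) ** 2)

# Naive aggregator: average the experts' reports.
averaging = reports.mean(axis=1)

# Bayesian aggregator for conditionally independent signals:
# posterior odds = (prod_i odds of report_i) / prior_odds**(n - 1).
odds = reports / (1 - reports)
prior_odds = prior / (1 - prior)
agg_odds = odds.prod(axis=1) / prior_odds ** (n_experts - 1)
bayes = agg_odds / (1 + agg_odds)

print(f"averaging loss: {squared_loss(averaging, event):.4f}")
print(f"Bayesian  loss: {squared_loss(bayes, event):.4f}")
```

Intuitively, under conditional independence this product-of-odds rule is Bayes-optimal once the prior is known, so little needs to be learned from samples; no such closed form exists for correlated signals, consistent with the exponential gap described in the abstract.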
Related papers
- Dimension-free Private Mean Estimation for Anisotropic Distributions [55.86374912608193]
Previous private estimators for distributions over $\mathbb{R}^d$ suffer from a curse of dimensionality.
We present an algorithm whose sample complexity has improved dependence on dimension.
arXiv Detail & Related papers (2024-11-01T17:59:53Z)
- Identifying General Mechanism Shifts in Linear Causal Representations [58.6238439611389]
We consider the linear causal representation learning setting where we observe a linear mixing of $d$ unknown latent factors.
Recent work has shown that it is possible to recover the latent factors as well as the underlying structural causal model over them.
We provide a surprising identifiability result: under some very mild standard assumptions, it is indeed possible to identify the set of shifted nodes.
arXiv Detail & Related papers (2024-10-31T15:56:50Z)
- Stochastic Approximation Approaches to Group Distributionally Robust Optimization [96.26317627118912]
Group distributionally robust optimization (GDRO) learns a model that performs well over $m$ different distributions simultaneously. Online learning techniques reduce the number of samples required in each round from $m$ to $1$ while keeping the same sample complexity (see the sketch below). A novel formulation of weighted GDRO allows distribution-dependent convergence rates to be derived.
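As a rough illustration of that one-sample-per-round idea, the following Python sketch plays the two-player stochastic game behind GDRO: an SGD learner against an exponentiated-gradient adversary over group weights. The toy linear-regression groups, step sizes, and the Exp3-style importance weighting are our own choices, not necessarily the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, T = 3, 5, 20_000          # groups, dimension, rounds
eta_w, eta_q = 0.1, 0.01        # learner / adversary step sizes

# Toy setup: each group is a different linear-regression distribution.
true_w = rng.normal(size=(m, d))

def draw(group):
    """One (x, y) sample from the chosen group's distribution."""
    x = rng.normal(size=d)
    return x, x @ true_w[group] + 0.1 * rng.normal()

w = np.zeros(d)                 # learner's model
q = np.full(m, 1.0 / m)         # adversary's weights over groups

for _ in range(T):
    i = rng.choice(m, p=q)      # sample ONE group per round ...
    x, y = draw(i)              # ... and ONE sample from that group
    loss = 0.5 * (x @ w - y) ** 2
    # Learner: SGD step on the sampled group's loss.
    w -= eta_w * (x @ w - y) * x
    # Adversary: exponentiated-gradient step; dividing by q[i] keeps the
    # bandit feedback an unbiased loss estimate (clipped for safety).
    q[i] *= np.exp(np.clip(eta_q * loss / q[i], 0.0, 30.0))
    q /= q.sum()

print("final group weights:", np.round(q, 3))
```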
arXiv Detail & Related papers (2023-02-18T09:24:15Z)
- On counterfactual inference with unobserved confounding [36.18241676876348]
Given an observational study with $n$ independent but heterogeneous units, our goal is to learn the counterfactual distribution for each unit.
We introduce a convex objective that pools all $n$ samples to jointly learn all $n$ parameter vectors.
We derive sufficient conditions for compactly supported distributions to satisfy the logarithmic Sobolev inequality.
arXiv Detail & Related papers (2022-11-14T04:14:37Z)
- On-Demand Sampling: Learning Optimally from Multiple Distributions [63.20009081099896]
Social and real-world considerations have given rise to multi-distribution learning paradigms.
We establish the optimal sample complexity of these learning paradigms and give algorithms that meet this sample complexity.
Our algorithm design and analysis are enabled by our extensions of online learning techniques for solving zero-sum games.
arXiv Detail & Related papers (2022-10-22T19:07:26Z)
- Metric Entropy Duality and the Sample Complexity of Outcome Indistinguishability [7.727052811126007]
In outcome indistinguishability, the goal is to output a predictor that cannot be distinguished from the target predictor.
We show that the sample complexity of outcome indistinguishability is characterized by the metric entropy of $P$ w.r.t. the dual Minkowski norm.
This equivalence makes an intriguing connection to the long-standing metric entropy duality conjecture in convex geometry.
arXiv Detail & Related papers (2022-03-09T06:02:31Z)
- A Statistical Learning View of Simple Kriging [0.0]
We analyze the simple Kriging task from a statistical learning perspective.
The goal is to predict the unknown values the underlying random field takes at any other location with minimum quadratic risk.
We prove non-asymptotic bounds of order $O_{\mathbb{P}}(1/\sqrt{n})$ for the excess risk of a plug-in predictive rule mimicking the true minimizer.
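A minimal sketch of the simple Kriging predictor whose excess risk the paper bounds, assuming a zero-mean field and a squared-exponential covariance of our own choosing (the paper's plug-in rule replaces such known quantities with estimates):

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(a, b, scale=0.5):
    """Squared-exponential covariance kernel (an illustrative choice)."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * scale**2))

# Observations of a zero-mean Gaussian field at n = 20 locations.
X = rng.uniform(0, 5, size=20)
K = rbf(X, X) + 1e-8 * np.eye(len(X))   # small jitter for stability
y = rng.multivariate_normal(np.zeros(len(X)), K)

# Simple Kriging: the minimum-quadratic-risk linear predictor at new
# locations is y_hat(x0) = k(x0, X) @ K^{-1} @ y.
X_new = np.linspace(0, 5, 6)
y_hat = rbf(X_new, X) @ np.linalg.solve(K, y)
print(np.round(y_hat, 3))
```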
arXiv Detail & Related papers (2022-02-15T12:46:43Z)
- Covariance-Aware Private Mean Estimation Without Private Covariance Estimation [10.036088581191592]
We present two sample-efficient differentially private mean estimators for $d$-dimensional (sub)Gaussian distributions.
Our estimators output $\tilde{\mu}$ such that $\| \tilde{\mu} - \mu \|_\Sigma \leq \alpha$, where $\| \cdot \|_\Sigma$ is the Mahalanobis distance.
arXiv Detail & Related papers (2021-06-24T21:40:07Z)
- The Sample Complexity of Robust Covariance Testing [56.98280399449707]
We are given i.i.d. samples from a distribution of the form $Z = (1-\epsilon) X + \epsilon B$, where $X$ is a zero-mean Gaussian $\mathcal{N}(0, \Sigma)$ with unknown covariance $\Sigma$.
In the absence of contamination, prior work gave a simple tester for this hypothesis testing task that uses $O(d)$ samples.
We prove a sample complexity lower bound of $\Omega(d^2)$ for $\epsilon$ an arbitrarily small constant and a constant $\gamma$.
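The contamination model itself is easy to simulate; this small sketch (with an arbitrary toy choice of the noise distribution $B$) shows how an $\epsilon$-fraction of outliers skews the empirical covariance, which is what makes the robust testing task harder:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, eps = 4, 10_000, 0.05

# Z = (1 - eps) X + eps B: with probability 1 - eps draw from the Gaussian,
# with probability eps draw from the noise distribution B (a toy choice here).
is_noise = rng.random(n) < eps
Z = rng.multivariate_normal(np.zeros(d), np.eye(d), size=n)
Z[is_noise] = rng.uniform(-10, 10, size=(is_noise.sum(), d))

# Even a small eps-fraction of outliers visibly skews the empirical covariance.
print(np.round(np.cov(Z, rowvar=False), 2))
```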
arXiv Detail & Related papers (2020-12-31T18:24:41Z)
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5 \varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
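For intuition, here is a minimal sketch of the asynchronous Q-learning update on a toy random MDP (the constants and the uniform behavior policy are our own illustrative choices): a single Markovian trajectory is followed, and only the visited state-action entry is updated at each step, which is why the minimum state-action occupancy $\mu_{\min}$ and the mixing time $t_{\mathrm{mix}}$ enter the bound.

```python
import numpy as np

rng = np.random.default_rng(4)
n_states, n_actions = 5, 2
gamma, alpha, T = 0.9, 0.1, 50_000   # discount, step size, steps

# Toy random MDP: transition kernel P[s, a] is a distribution over states.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(T):
    a = rng.integers(n_actions)              # uniform behavior policy
    s_next = rng.choice(n_states, p=P[s, a])
    # Asynchronous update: only the visited (s, a) entry changes per step.
    Q[s, a] += alpha * (R[s, a] + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(np.round(Q, 2))
```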
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
- Private Mean Estimation of Heavy-Tailed Distributions [10.176795938619417]
We give new upper and lower bounds on the minimax sample complexity of differentially private mean estimation of distributions with bounded $k$-th moments.
We show that $n = \Theta\left(\frac{1}{\alpha^2} + \frac{1}{\alpha^{\frac{k}{k-1}} \varepsilon}\right)$ samples are necessary and sufficient to estimate the mean to $\alpha$-accuracy under $\varepsilon$-differential privacy, or any of its common relaxations.
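As a hedged illustration of the mechanics (this is the textbook clip-and-noise Laplace mechanism, not the paper's estimator), the sketch below computes an $\varepsilon$-DP mean estimate on heavy-tailed data; the clipping radius trades tail bias against added noise, roughly the balance that the $\alpha^{k/(k-1)}$ term in the bound reflects.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps, clip = 5_000, 1.0, 3.0

# Heavy-tailed toy data: Student-t, so only low-order moments are bounded.
x = rng.standard_t(df=3, size=n)

# Clip-then-average: replacing one sample moves the clipped mean by at most
# 2 * clip / n, so Laplace noise of scale (2 * clip) / (n * eps) is eps-DP.
private_mean = np.clip(x, -clip, clip).mean() + rng.laplace(scale=2 * clip / (n * eps))
print(f"true mean = 0, private estimate: {private_mean:.3f}")
```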
arXiv Detail & Related papers (2020-02-21T18:30:48Z)