From prediction markets to interpretable collective intelligence
- URL: http://arxiv.org/abs/2204.13424v3
- Date: Fri, 1 Sep 2023 16:37:50 GMT
- Title: From prediction markets to interpretable collective intelligence
- Authors: Alexey V. Osipov, Nikolay N. Osipov
- Abstract summary: We create a system that elicits, from an arbitrary group of experts, the probability of the truth of an arbitrary logical proposition.
We argue for the possibility of the development of a self-resolving prediction market with play money that incentivizes direct information exchange between experts.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We outline how to create a mechanism that provides an optimal way to elicit,
from an arbitrary group of experts, the probability of the truth of an
arbitrary logical proposition together with collective information that has an
explicit form and interprets this probability. Namely, we provide strong
arguments for the possibility of the development of a self-resolving prediction
market with play money that incentivizes direct information exchange between
experts. Such a system could, in particular, motivate many experts
simultaneously to collectively solve scientific or medical problems in a very
efficient manner. We also note that in our considerations, experts are not
assumed to be Bayesian.
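The abstract does not specify the market's trading rule; as a minimal hedged sketch, Hanson's logarithmic market scoring rule (LMSR) is one standard way a play-money market turns expert trades into a probability estimate (the class name and liquidity value below are illustrative, not the paper's mechanism):

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule (LMSR) market maker for a binary
    proposition. The instantaneous price of a YES share is the market's
    current probability estimate. Illustrative sketch only; the paper's
    self-resolving mechanism is not reproduced here."""

    def __init__(self, liquidity: float = 10.0):
        self.b = liquidity          # liquidity parameter: larger = harder to move
        self.q_yes = 0.0            # outstanding YES shares
        self.q_no = 0.0             # outstanding NO shares

    def cost(self, q_yes: float, q_no: float) -> float:
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self) -> float:
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares: float) -> float:
        """Return the play-money cost an expert pays to buy YES shares."""
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before

market = LMSRMarket(liquidity=10.0)
market.buy_yes(5.0)                  # an expert who believes YES buys in
print(round(market.price_yes(), 3))  # the price (probability) moves above 0.5
```

An expert with information moves the price toward their belief and profits (in play money) if the proposition resolves their way, which is the incentive the abstract refers to.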
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study [65.11303133775857]
Mixture-of-Experts (MoE) computation amalgamates predictions from several specialized sub-models (referred to as experts).
Sparse MoE selectively engages only a limited number, or even just one expert, significantly reducing overhead while empirically preserving, and sometimes even enhancing, performance.
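Top-k routing of this kind can be sketched as follows (all names and shapes here are hypothetical; this is not the paper's implementation):

```python
import numpy as np

def sparse_moe(x, gate_w, experts, k=1):
    """Route input x to the top-k experts by gate score and combine their
    outputs, weighted by a softmax over the selected scores only. Only the
    selected experts run, which is the source of the compute saving."""
    scores = gate_w @ x                        # one gating score per expert
    top = np.argsort(scores)[-k:]              # indices of the k best experts
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()
    return sum(w * experts[i](x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
x = rng.normal(size=4)
gate_w = rng.normal(size=(3, 4))               # a gate over 3 experts
experts = [lambda v, W=rng.normal(size=(2, 4)): W @ v for _ in range(3)]
y = sparse_moe(x, gate_w, experts, k=1)
print(y.shape)                                 # (2,)
```

With k=1 only a single expert's forward pass is computed, yet the output dimension matches a dense combination of all experts.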
arXiv Detail & Related papers (2024-03-26T05:48:02Z)
- Robust Decision Aggregation with Adversarial Experts [4.751372843411884]
We consider a binary decision aggregation problem in the presence of both truthful and adversarial experts.
We find the optimal aggregator minimizing regret under the worst information structure.
arXiv Detail & Related papers (2024-03-13T03:47:08Z)
- Defining Expertise: Applications to Treatment Effect Estimation [58.7977683502207]
We argue that expertise - particularly the type of expertise the decision-makers of a domain are likely to have - can be informative in designing and selecting methods for treatment effect estimation.
We define two types of expertise, predictive and prognostic, and demonstrate empirically that: (i) the prominent type of expertise in a domain significantly influences the performance of different methods in treatment effect estimation, and (ii) it is possible to predict the type of expertise present in a dataset.
arXiv Detail & Related papers (2024-03-01T17:30:49Z)
- Unsupervised Opinion Aggregation -- A Statistical Perspective [5.665646276894791]
Complex decision-making systems rely on opinions to form an understanding of what the ground truth could be.
This paper explores a statistical approach to infer the competence of each expert based on their opinions without any need for the ground truth.
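One classic way to infer competence without ground truth, in the spirit of (but not identical to) the paper's estimator, is a Dawid-Skene-style EM loop that alternates between a soft estimate of the truth and each expert's accuracy:

```python
import numpy as np

def estimate_competence(opinions, iters=30):
    """Jointly estimate a soft ground truth and each expert's accuracy from
    binary opinions alone (rows = items, cols = experts). A simplified,
    symmetric Dawid-Skene-style EM sketch; not the paper's exact estimator."""
    truth = opinions.mean(axis=1)                     # init: soft majority vote
    for _ in range(iters):
        # M-step: accuracy = expected agreement with the current soft truth
        acc = (opinions * truth[:, None]
               + (1 - opinions) * (1 - truth[:, None])).mean(axis=0)
        acc = np.clip(acc, 1e-3, 1 - 1e-3)
        # E-step: posterior log-odds of truth = 1, with a uniform prior,
        # weighting each expert by the log-odds of its estimated accuracy
        w = np.log(acc / (1 - acc))
        truth = 1 / (1 + np.exp(-((2 * opinions - 1) @ w)))
    return acc, truth

rng = np.random.default_rng(0)
hidden = rng.integers(0, 2, size=200)                 # unobserved ground truth
accs = np.array([0.9, 0.9, 0.55])                     # expert 3 is near-random
correct = rng.random((200, 3)) < accs                 # True -> report correctly
opinions = np.where(correct, hidden[:, None], 1 - hidden[:, None]).astype(float)
est_acc, _ = estimate_competence(opinions)
print(est_acc.round(2))    # the two competent experts should score clearly higher
```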
arXiv Detail & Related papers (2023-08-20T23:14:52Z)
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining which of the infinitely many predictions the agent could possibly make would best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
- Counterfactual Inference of Second Opinions [13.93477033094828]
Automated decision support systems that are able to infer second opinions from experts can potentially facilitate a more efficient allocation of resources.
This paper looks at the design of this type of support systems from the perspective of counterfactual inference.
Experiments on both synthetic and real data show that our model can be used to infer second opinions more accurately than its non-causal counterpart.
arXiv Detail & Related papers (2022-03-16T14:40:41Z)
- Improving Expert Predictions with Conformal Prediction [14.850555720410677]
Existing systems typically require experts to understand when to cede agency to the system or when to exercise their own agency.
We develop an automated decision support system that allows experts to make more accurate predictions and is robust to the accuracy of the predictor it relies on.
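A hedged sketch of split conformal prediction, one standard way such a system can hand an expert a prediction set with a coverage guarantee (function names and the toy data below are illustrative, not the paper's system):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification: use held-out
    calibration data to pick a score threshold, then return, for each test
    point, the set of labels whose predicted probability clears it. The set
    contains the true label with probability >= 1 - alpha (marginally)."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true label
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores
    q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n))
    return [np.where(1.0 - p <= q)[0] for p in test_probs]

cal_probs = np.array([[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.9, 0.1]])
cal_labels = np.array([0, 0, 1, 0])
test_probs = np.array([[0.7, 0.3]])
print(conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1))
```

The expert then only needs to choose among labels inside the set, rather than deciding from scratch when to trust the predictor.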
arXiv Detail & Related papers (2022-01-28T09:35:37Z)
- Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
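The "average then extremize" rule can be sketched directly; the linear transform with clipping below is one simple reading of "moving the average away from the prior by a constant factor", not necessarily the paper's exact map:

```python
def extremize(forecasts, prior, factor):
    """Average the experts' probability forecasts, then move the average
    away from the common prior by a constant factor ('extremization').
    Linear version with clipping to [0, 1]; an illustrative sketch only."""
    avg = sum(forecasts) / len(forecasts)
    return min(1.0, max(0.0, prior + factor * (avg - prior)))

# Experts with overlapping (substitutable) information all lean the same way
# past a 0.5 prior, so the aggregate is pushed further from the prior
print(extremize([0.6, 0.65, 0.7, 0.62], prior=0.5, factor=2.0))
```

The intuition is that when experts' signals are substitutable, each forecast only partially reflects the shared evidence, so the honest average understates how far the group's pooled information should move the estimate from the prior.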
arXiv Detail & Related papers (2021-11-04T20:50:30Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
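A minimal sketch of the idea, where each agent exposes only its prediction on the test point (the paper's mechanism combines agents more carefully than the plain average used here):

```python
import numpy as np

def collective_predict(agent_predict_fns, x):
    """Each agent contributes only its prediction on the test point x,
    never its data or parameters; the collective output is the plain
    average. A minimal sketch, not the paper's mechanism."""
    preds = np.array([predict(x) for predict in agent_predict_fns])
    return preds.mean(axis=0)

# Hypothetical pre-trained 'models': fixed linear predictors, one per agent
weights = [np.array([1.0, 0.0]), np.array([0.5, 0.5]), np.array([0.0, 1.0])]
agents = [lambda x, w=w: w @ x for w in weights]
x = np.array([2.0, 4.0])
print(collective_predict(agents, x))   # mean of the agents' predictions
```

Because only predictions cross agent boundaries, each party keeps its training data and model parameters private while still benefiting from the collective.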
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.