Unsupervised Opinion Aggregation -- A Statistical Perspective
- URL: http://arxiv.org/abs/2308.10386v1
- Date: Sun, 20 Aug 2023 23:14:52 GMT
- Title: Unsupervised Opinion Aggregation -- A Statistical Perspective
- Authors: Noyan C. Sevuktekin and Andrew C. Singer
- Abstract summary: Complex decision-making systems rely on opinions to form an understanding of what the ground truth could be.
This paper explores a statistical approach to infer the competence of each expert based on their opinions without any need for the ground truth.
- Score: 5.665646276894791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex decision-making systems rarely have direct access to the current
state of the world and they instead rely on opinions to form an understanding
of what the ground truth could be. Even in problems where experts provide
opinions without any intention to manipulate the decision maker, it is
challenging to decide which expert's opinion is more reliable -- a challenge
that is further amplified when the decision maker has limited, delayed, or no
access to the ground truth after the fact. This paper explores a statistical
approach to infer the competence of each expert based on their opinions without
any need for the ground truth. Echoing the logic behind what is commonly
referred to as "the wisdom of crowds", we propose measuring the competence of
each expert by how likely they are to agree with their peers. We
further show that the more reliable an expert is, the more likely it is that
they agree with their peers. We leverage this fact to propose a completely
unsupervised version of the naïve Bayes classifier and show that the
proposed technique is asymptotically optimal for a large class of problems. In
addition to aggregating a large block of opinions, we further apply our
technique for online opinion aggregation and for decision-making based on a
limited number of opinions.
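As a concrete illustration of the recipe in the abstract, here is a minimal NumPy sketch, not the paper's exact estimator: it scores each expert by average agreement with peers, converts that score into a heuristic accuracy estimate, and aggregates opinions with naïve-Bayes log-odds weights. It assumes binary opinions, conditionally independent experts, and a pool that is better than chance on average.
```python
import numpy as np

def aggregate_opinions(X):
    """Aggregate binary expert opinions without ground truth.

    X: (m, n) array with entries in {-1, +1}; row i holds expert i's
    opinions on n questions. Returns (labels, p_hat).
    """
    m, n = X.shape
    # Pairwise agreement: A[i, j] = (agreements - disagreements) / n.
    A = (X @ X.T) / n
    # Mean agreement of each expert with the other m - 1 experts
    # (the diagonal self-agreement of 1 is excluded).
    peer_agree = (A.sum(axis=1) - 1.0) / (m - 1)
    # Heuristic competence proxy: map agreement from [-1, 1] to an
    # accuracy-like score in (0, 1), clipped so log-odds stay finite.
    p_hat = np.clip((1.0 + peer_agree) / 2.0, 1e-3, 1.0 - 1e-3)
    # Naive-Bayes log-odds weights and a weighted majority vote.
    w = np.log(p_hat / (1.0 - p_hat))
    labels = np.sign(w @ X)
    labels[labels == 0] = 1  # break exact ties arbitrarily
    return labels, p_hat
```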
Related papers
- Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs [86.79757571440082]
Large language models (LLMs) such as OpenAI's o1 have demonstrated remarkable abilities in complex reasoning tasks.
We identify a phenomenon we term underthinking, where o1-like LLMs frequently switch between different reasoning thoughts.
We propose a decoding strategy with a thought-switching penalty (TIP) that discourages premature transitions between thoughts.
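A minimal sketch of that idea, with the penalty strength, duration, and switch-token list as illustrative assumptions rather than the paper's reported settings:
```python
import numpy as np

def thought_switch_penalty(logits, step, switch_token_ids,
                           alpha=3.0, beta=600):
    """Subtract a penalty from 'thought-switch' token logits.

    logits: (vocab_size,) next-token logits at the current step.
    step: current decoding step index.
    switch_token_ids: token ids that typically open a new line of
        reasoning (e.g., the tokenization of "Alternatively").
    alpha: penalty strength; beta: how many early steps are penalized.
    """
    if step < beta:
        logits = logits.copy()
        logits[switch_token_ids] -= alpha
    return logits
```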
arXiv Detail & Related papers (2025-01-30T18:58:18Z)
- Optimal bounds for dissatisfaction in perpetual voting [84.02572742131521]
We consider a perpetual approval voting method that guarantees that no voter is dissatisfied too many times.
We identify a sufficient condition on voter behavior under which a sublinear growth of dissatisfaction is possible.
We present a voting method with sublinear guarantees on dissatisfaction under bounded conflicts, based on standard techniques from prediction with expert advice.
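For background on the prediction-with-expert-advice machinery the method builds on, here is a minimal multiplicative-weights (Hedge) sketch; this is the standard textbook algorithm, not the paper's voting rule:
```python
import numpy as np

def hedge_weights(losses, eta=0.5):
    """Run the Hedge algorithm over a (T, m) array of expert losses
    in [0, 1] and return the (T, m) weight vectors that were played.

    The cumulative weighted loss is within O(sqrt(T log m)) of the
    best single expert in hindsight (for a suitable eta).
    """
    T, m = losses.shape
    w = np.full(m, 1.0 / m)
    played = np.empty((T, m))
    for t in range(T):
        played[t] = w
        w = w * np.exp(-eta * losses[t])
        w /= w.sum()
    return played
```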
arXiv Detail & Related papers (2024-12-20T19:58:55Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Learning To Guide Human Decision Makers With Vision-Language Models [17.957952996809716]
There is increasing interest in developing AIs for assisting human decision-making in high-stakes tasks, such as medical diagnosis.
We introduce learning to guide (LTG), an alternative framework in which - rather than taking control from the human expert - the machine provides guidance.
In order to ensure guidance is interpretable, we develop SLOG, an approach for turning any vision-language model into a capable generator of textual guidance.
arXiv Detail & Related papers (2024-03-25T07:34:42Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and develops a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z)
- From prediction markets to interpretable collective intelligence [0.0]
We create a system that elicits, from an arbitrary group of experts, the probability of the truth of an arbitrary logical proposition.
We argue for the possibility of the development of a self-resolving prediction market with play money that incentivizes direct information exchange between experts.
arXiv Detail & Related papers (2022-04-28T11:44:29Z)
- Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
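Read literally, the recipe is: average the forecasts, then push the average away from the prior by a constant factor. A minimal sketch; the paper's constant and its analysis under the projective substitutes condition are not reproduced here:
```python
def extremize(forecasts, prior, factor=2.0):
    """Average probability forecasts, then move the average away from
    the prior by a constant factor, clipped back into [0, 1].

    forecasts: list of probabilities in [0, 1], one per expert.
    prior: common prior probability of the event.
    factor: illustrative extremization constant (factor > 1 moves the
        aggregate away from the prior; 1.0 is plain averaging).
    """
    avg = sum(forecasts) / len(forecasts)
    return min(1.0, max(0.0, prior + factor * (avg - prior)))
```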
arXiv Detail & Related papers (2021-11-04T20:50:30Z)
- A Machine Learning Framework Towards Transparency in Experts' Decision Quality [0.0]
In many important settings, transparency in experts' decision quality is rarely possible because ground truth data for evaluating the experts' decisions is costly and available only for a limited set of decisions.
We first formulate the problem of estimating experts' decision accuracy in this setting and then develop a machine-learning-based framework to address it.
Our method effectively leverages both abundant historical data on workers' past decisions, and scarce decision instances with ground truth information.
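One simple way to combine the two data sources the summary mentions, offered as a hypothetical illustration rather than the paper's method: estimate each expert's accuracy on the scarce labeled items and blend it with agreement against the majority vote on the abundant unlabeled ones.
```python
import numpy as np

def blended_accuracy(votes_u, votes_l, truth, lam=0.5):
    """Blend supervised and unsupervised accuracy signals per expert.

    votes_u: (m, n_u) opinions in {-1, +1} on items with no labels.
    votes_l: (m, n_l) opinions on the scarce labeled items.
    truth:   (n_l,) ground-truth labels in {-1, +1}.
    lam: weight placed on the scarce supervised estimate.
    """
    # Supervised estimate from the small labeled set.
    acc_sup = (votes_l == truth).mean(axis=1)
    # Unsupervised proxy: agreement with the majority vote on the
    # abundant unlabeled items (ties broken toward +1).
    mv = np.sign(votes_u.sum(axis=0))
    mv[mv == 0] = 1
    acc_unsup = (votes_u == mv).mean(axis=1)
    return lam * acc_sup + (1.0 - lam) * acc_unsup
```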
arXiv Detail & Related papers (2021-10-21T18:50:40Z)
- Dealing with Expert Bias in Collective Decision-Making [4.588028371034406]
We propose a new algorithmic approach based on contextual multi-armed bandit problems (CMAB) to identify and counteract biased expertise.
Our novel CMAB-inspired approach achieves a higher final performance and does so while converging more rapidly than previous adaptive algorithms.
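To make the CMAB framing concrete, a minimal epsilon-greedy bandit over experts with discrete contexts; this illustrates the general setup only and is not the paper's algorithm:
```python
import numpy as np

class ExpertBandit:
    """Epsilon-greedy selection of which expert to trust per context."""

    def __init__(self, n_contexts, n_experts, eps=0.1, seed=0):
        self.eps = eps
        self.rng = np.random.default_rng(seed)
        self.counts = np.zeros((n_contexts, n_experts))
        self.values = np.zeros((n_contexts, n_experts))

    def select(self, ctx):
        # Explore with probability eps, otherwise pick the expert with
        # the highest estimated reward in this context.
        if self.rng.random() < self.eps:
            return int(self.rng.integers(self.values.shape[1]))
        return int(np.argmax(self.values[ctx]))

    def update(self, ctx, expert, reward):
        # Incremental mean of observed rewards, e.g. 1.0 if the chosen
        # expert's opinion later proved correct, else 0.0.
        self.counts[ctx, expert] += 1
        n = self.counts[ctx, expert]
        self.values[ctx, expert] += (reward - self.values[ctx, expert]) / n
```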
arXiv Detail & Related papers (2021-06-25T10:17:37Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.