Unsupervised Opinion Aggregation -- A Statistical Perspective
- URL: http://arxiv.org/abs/2308.10386v1
- Date: Sun, 20 Aug 2023 23:14:52 GMT
- Title: Unsupervised Opinion Aggregation -- A Statistical Perspective
- Authors: Noyan C. Sevuktekin and Andrew C. Singer
- Abstract summary: Complex decision-making systems rely on opinions to form an understanding of what the ground truth could be.
This paper explores a statistical approach to infer the competence of each expert based on their opinions without any need for the ground truth.
- Score: 5.665646276894791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Complex decision-making systems rarely have direct access to the current
state of the world and they instead rely on opinions to form an understanding
of what the ground truth could be. Even in problems where experts provide
opinions without any intention to manipulate the decision maker, it is
challenging to decide which expert's opinion is more reliable -- a challenge
that is further amplified when the decision maker has limited, delayed, or no
access to the ground truth after the fact. This paper explores a statistical
approach to infer the competence of each expert based on their opinions without
any need for the ground truth. Echoing the logic behind what is commonly
referred to as *the wisdom of crowds*, we propose measuring the
competence of each expert by their likelihood of agreeing with their peers. We
further show that the more reliable an expert is, the more likely they are to
agree with their peers. We leverage this fact to propose a completely
unsupervised version of the naïve Bayes classifier and show that the
proposed technique is asymptotically optimal for a large class of problems. In
addition to aggregating a large block of opinions, we further apply our
technique to online opinion aggregation and to decision-making based on a
limited number of opinions.
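The core idea above can be illustrated with a minimal sketch. Note this is an illustration of the general approach, not the paper's exact estimator: here each expert's competence is approximated by their agreement rate with the pooled majority vote (a crude stand-in for peer agreement), and the resulting estimates drive a log-odds-weighted vote, which is what a naive Bayes aggregator reduces to for symmetric binary opinions. All variable names and the synthetic setup are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 5 experts with hidden competences give binary opinions
# on 2000 questions whose ground truth is a fair coin flip.
competence = np.array([0.9, 0.8, 0.7, 0.6, 0.55])
n_questions = 2000
truth = rng.integers(0, 2, n_questions)
correct = rng.random((len(competence), n_questions)) < competence[:, None]
opinions = np.where(correct, truth, 1 - truth)   # shape: experts x questions

# Step 1 (unsupervised competence estimate): score each expert by how often
# they agree with the pooled majority vote -- echoing the wisdom-of-crowds
# observation that more reliable experts agree more with the crowd.
majority = (opinions.mean(axis=0) > 0.5).astype(int)
agreement = (opinions == majority).mean(axis=1)
p_hat = np.clip(agreement, 1e-3, 1 - 1e-3)       # avoid infinite weights

# Step 2 (aggregation): naive Bayes with estimated competences reduces to
# a vote weighted by the log-odds of each expert being correct.
weights = np.log(p_hat / (1 - p_hat))
scores = weights @ (2 * opinions - 1)            # signed votes in {-1, +1}
decisions = (scores > 0).astype(int)

print("estimated competences:", np.round(p_hat, 3))
print("majority-vote accuracy:", (majority == truth).mean())
print("weighted-vote accuracy:", (decisions == truth).mean())
```

With no access to the ground truth, the agreement scores still recover the experts' reliability ordering, and the weighted vote matches or improves on the unweighted majority.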
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Robust Decision Aggregation with Adversarial Experts [4.751372843411884]
We consider a binary decision aggregation problem in the presence of both truthful and adversarial experts.
We find the optimal aggregator minimizing regret under the worst information structure.
arXiv Detail & Related papers (2024-03-13T03:47:08Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - ChoiceMates: Supporting Unfamiliar Online Decision-Making with
Multi-Agent Conversational Interactions [58.71970923420007]
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent baseline showed that ChoiceMates was more helpful in discovering, exploring, and managing information, with higher user confidence.
arXiv Detail & Related papers (2023-10-02T16:49:39Z) - On solving decision and risk management problems subject to uncertainty [91.3755431537592]
Uncertainty is a pervasive challenge in decision and risk management.
This paper develops a systematic understanding of such strategies, determines their range of application, and develops a framework to better employ them.
arXiv Detail & Related papers (2023-01-18T19:16:23Z) - From prediction markets to interpretable collective intelligence [0.0]
We create a system that elicits, from an arbitrary group of experts, the probability of the truth of an arbitrary logical proposition.
We argue for the possibility of the development of a self-resolving prediction market with play money that incentivizes direct information exchange between experts.
arXiv Detail & Related papers (2022-04-28T11:44:29Z) - Towards Collaborative Question Answering: A Preliminary Study [63.91687114660126]
We propose CollabQA, a novel QA task in which several expert agents coordinated by a moderator work together to answer questions that cannot be answered with any single agent alone.
We construct a synthetic dataset from a large knowledge graph that can be distributed to experts.
We show that the problem can be challenging without introducing a prior on the collaboration structure, unless experts are perfect and uniform.
arXiv Detail & Related papers (2022-01-24T14:27:00Z) - Are You Smarter Than a Random Expert? The Robust Aggregation of
Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then *extremizing* the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
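A minimal sketch of this average-then-extremize idea, under the illustrative assumption that extremization is done in log-odds space with a constant factor `d` (the paper's exact construction may differ):

```python
import numpy as np

def extremize(forecasts, prior, d=2.0):
    """Average probability forecasts, then push the average away from the
    prior by a constant factor d in log-odds space. The factor d and the
    log-odds parameterization are illustrative choices."""
    logit = lambda p: np.log(p / (1 - p))
    avg = np.mean(forecasts)
    z = logit(prior) + d * (logit(avg) - logit(prior))
    return 1 / (1 + np.exp(-z))   # back to probability via the sigmoid

# Three experts each see independent weak evidence for an event; their
# plain average (0.6) understates the combined evidence, so the
# extremized aggregate lands above it.
print(extremize([0.6, 0.65, 0.55], prior=0.5, d=2.0))
```

The intuition: when experts hold independent evidence, each forecast already discounts toward the prior, so the simple average is under-confident and pushing it away from the prior recovers some of the lost confidence.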
arXiv Detail & Related papers (2021-11-04T20:50:30Z) - A Machine Learning Framework Towards Transparency in Experts' Decision
Quality [0.0]
In many important settings, transparency in experts' decision quality is rarely possible because ground truth data for evaluating the experts' decisions is costly and available only for a limited set of decisions.
We first formulate the problem of estimating experts' decision accuracy in this setting and then develop a machine-learning-based framework to address it.
Our method effectively leverages both abundant historical data on workers' past decisions, and scarce decision instances with ground truth information.
arXiv Detail & Related papers (2021-10-21T18:50:40Z) - Dealing with Expert Bias in Collective Decision-Making [4.588028371034406]
We propose a new algorithmic approach based on contextual multi-armed bandits (CMAB) to identify and counteract biased expertise.
Our novel CMAB-inspired approach achieves a higher final performance and does so while converging more rapidly than previous adaptive algorithms.
arXiv Detail & Related papers (2021-06-25T10:17:37Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.