Robust Decision Aggregation with Adversarial Experts
- URL: http://arxiv.org/abs/2403.08222v1
- Date: Wed, 13 Mar 2024 03:47:08 GMT
- Title: Robust Decision Aggregation with Adversarial Experts
- Authors: Yongkang Guo, Yuqing Kong
- Abstract summary: We consider a binary decision aggregation problem in the presence of both truthful and adversarial experts.
We find the optimal aggregator minimizing regret under the worst information structure.
- Score: 4.751372843411884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a binary decision aggregation problem in the presence of both
truthful and adversarial experts. The truthful experts will report their
private signals truthfully with proper incentive, while the adversarial experts
can report arbitrarily. The decision maker needs to design a robust aggregator
to forecast the true state of the world based on the reports of experts. The
decision maker does not know the specific information structure, which is a
joint distribution of signals, states, and strategies of adversarial experts.
We want to find the optimal aggregator minimizing regret under the worst
information structure. The regret is defined by the difference in expected loss
between the aggregator and a benchmark who makes the optimal decision given the
joint distribution and reports of truthful experts.
We prove that when the truthful experts are symmetric and adversarial experts
are not too numerous, the truncated mean is optimal: we remove some of the
lowest and highest reports and average the remaining ones. Moreover, in many
settings, the optimal aggregators are in the family
of piecewise linear functions. The regret is independent of the total number of
experts but only depends on the ratio of adversaries. We evaluate our
aggregators with numerical experiments on an ensemble learning task. We also
obtain some negative results for the aggregation problem with adversarial
experts under more general information structures and report spaces.
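The truncated mean described above can be sketched in a few lines. The trim width k (how many reports to drop from each end, presumably tied to the number of adversaries) is left as a parameter here, and the function name and signature are illustrative, not taken from the paper:

```python
def truncated_mean(reports, k):
    """Drop the k lowest and k highest reports, then average the rest.

    reports: iterable of expert forecasts (e.g. probabilities in [0, 1]).
    k: number of reports to trim from each end; must satisfy 2*k < len(reports).
    """
    r = sorted(reports)
    if 2 * k >= len(r):
        raise ValueError("k must be less than half the number of reports")
    kept = r[k : len(r) - k]
    return sum(kept) / len(kept)
```

With k at least the number of adversaries, any single adversarial report is guaranteed to be either discarded or bracketed by truthful reports, which is the intuition behind the robustness of this aggregator.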
Related papers
- Generalization Error Analysis for Sparse Mixture-of-Experts: A Preliminary Study [65.11303133775857]
Mixture-of-Experts (MoE) computation amalgamates predictions from several specialized sub-models (referred to as experts).
Sparse MoE selectively engages only a limited number, or even just one expert, significantly reducing overhead while empirically preserving, and sometimes even enhancing, performance.
arXiv Detail & Related papers (2024-03-26T05:48:02Z) - Inverse Reinforcement Learning with Sub-optimal Experts [56.553106680769474]
We study the theoretical properties of the class of reward functions that are compatible with a given set of experts.
Our results show that the presence of multiple sub-optimal experts can significantly shrink the set of compatible rewards.
We analyze a uniform sampling algorithm that results in being minimax optimal whenever the sub-optimal experts' performance level is sufficiently close to the one of the optimal agent.
arXiv Detail & Related papers (2024-01-08T12:39:25Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy [84.11508381847929]
Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up the learning capacity of neural networks.
We propose M-SMoE, which leverages routing statistics to guide expert merging.
Our MC-SMoE achieves up to 80% memory and a 20% FLOPs reduction, with virtually no loss in performance.
arXiv Detail & Related papers (2023-10-02T16:51:32Z) - Unsupervised Opinion Aggregation -- A Statistical Perspective [5.665646276894791]
Complex decision-making systems rely on opinions to form an understanding of what the ground truth could be.
This paper explores a statistical approach to infer the competence of each expert based on their opinions without any need for the ground truth.
arXiv Detail & Related papers (2023-08-20T23:14:52Z) - Active Ranking of Experts Based on their Performances in Many Tasks [72.96112117037465]
We consider the problem of ranking n experts based on their performances on d tasks.
We make a monotonicity assumption stating that for each pair of experts, one outperforms the other on all tasks.
arXiv Detail & Related papers (2023-06-05T06:55:39Z) - Investigating User Radicalization: A Novel Dataset for Identifying Fine-Grained Temporal Shifts in Opinion [7.028604573959653]
We introduce an innovative annotated dataset for modeling subtle opinion fluctuations and detecting fine-grained stances.
The dataset includes a sufficient amount of stance polarity and intensity labels per user over time and within entire conversational threads.
All posts are annotated by non-experts and a significant portion of the data is also annotated by experts.
arXiv Detail & Related papers (2022-04-16T09:31:25Z) - Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
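The extremization step summarized here admits a simple sketch. The exact form below (a linear move away from the prior, clipped to [0, 1]) and the parameter names are assumptions for illustration, not taken from that paper:

```python
def extremize(avg, prior, alpha):
    """Move an averaged forecast away from the prior by factor alpha.

    avg: the mean of the experts' forecasts, in [0, 1].
    prior: the common prior probability of the event.
    alpha: extremization factor; alpha > 1 pushes the forecast away
           from the prior, alpha = 1 leaves it unchanged.
    """
    # Clip to keep the result a valid probability.
    return min(1.0, max(0.0, prior + alpha * (avg - prior)))
```

Intuitively, each expert's forecast only partially incorporates the others' evidence, so the plain average is under-confident; extremizing compensates for that.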
arXiv Detail & Related papers (2021-11-04T20:50:30Z) - Gaussian Experts Selection using Graphical Models [7.530615321587948]
Local approximations reduce time complexity by dividing the original dataset into subsets and training a local expert on each subset.
We leverage techniques from the literature on undirected graphical models, using sparse precision matrices that encode conditional dependencies between experts to select the most important experts.
arXiv Detail & Related papers (2021-02-02T14:12:11Z) - Prediction with Corrupted Expert Advice [67.67399390910381]
We prove that a variant of the classical Multiplicative Weights algorithm with decreasing step sizes achieves constant regret in a benign environment.
Our results reveal a surprising disparity between the often comparable Follow the Regularized Leader (FTRL) and Online Mirror Descent (OMD) frameworks.
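The Multiplicative Weights variant mentioned here can be sketched as follows; the decreasing step-size schedule eta_t = 1/sqrt(t) is a common choice used for illustration, and the exact schedule analyzed in that paper may differ:

```python
import math

def multiplicative_weights(losses):
    """Multiplicative Weights with decreasing step sizes eta_t = 1/sqrt(t).

    losses: list of per-round loss vectors, one loss in [0, 1] per expert.
    Returns the final weight distribution over experts.
    """
    n = len(losses[0])
    w = [1.0] * n
    for t, loss in enumerate(losses, start=1):
        eta = 1.0 / math.sqrt(t)
        # Exponentially down-weight experts in proportion to their loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w
```

After a few rounds the weight concentrates on the experts with the lowest cumulative loss, which is the mechanism behind the constant-regret guarantee in a benign environment.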
arXiv Detail & Related papers (2020-02-24T14:39:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.