Robust Decision Aggregation with Adversarial Experts
- URL: http://arxiv.org/abs/2403.08222v1
- Date: Wed, 13 Mar 2024 03:47:08 GMT
- Title: Robust Decision Aggregation with Adversarial Experts
- Authors: Yongkang Guo, Yuqing Kong
- Abstract summary: We consider a binary decision aggregation problem in the presence of both truthful and adversarial experts.
We find the optimal aggregator minimizing regret under the worst information structure.
- Score: 4.751372843411884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a binary decision aggregation problem in the presence of both
truthful and adversarial experts. The truthful experts will report their
private signals truthfully with proper incentive, while the adversarial experts
can report arbitrarily. The decision maker needs to design a robust aggregator
to forecast the true state of the world based on the reports of experts. The
decision maker does not know the specific information structure, which is a
joint distribution of signals, states, and strategies of adversarial experts.
We want to find the optimal aggregator minimizing regret under the worst
information structure. The regret is defined by the difference in expected loss
between the aggregator and a benchmark who makes the optimal decision given the
joint distribution and reports of truthful experts.
We prove that when the truthful experts are symmetric and the adversarial
experts are not too numerous, the truncated mean is optimal: we remove some of
the lowest and highest reports and average the remaining ones. Moreover, in
many settings the optimal aggregators belong to the family of piecewise linear
functions. The regret is independent of the total number of experts and depends
only on the ratio of adversaries. We evaluate our aggregators by numerical
experiments in an ensemble learning task. We also obtain negative results for
the aggregation problem with adversarial experts under more general information
structures and report spaces.
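The truncated mean described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the choice of the trimming parameter `k` (e.g., an upper bound on the number of adversaries) are assumptions for the example.

```python
import numpy as np

def truncated_mean(reports, k):
    """Drop the k lowest and k highest reports, then average the rest.

    reports: iterable of experts' reported forecasts.
    k: number of reports trimmed from each end (e.g., a bound on adversaries).
    """
    reports = np.sort(np.asarray(reports, dtype=float))
    if 2 * k >= len(reports):
        raise ValueError("k too large for the number of reports")
    # Adversarial reports pushed to the extremes are removed before averaging.
    return reports[k:len(reports) - k].mean()
```

For example, with reports `[0.0, 0.2, 0.4, 1.0]` and `k=1`, the extremes 0.0 and 1.0 are dropped and the aggregate is the mean of 0.2 and 0.4, i.e. 0.3.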
Related papers
- Mitigating the Participation Bias by Balancing Extreme Ratings [3.5785450878667597]
We consider a robust rating aggregation task under the participation bias.
Our goal is to minimize the expected squared loss between the aggregated ratings and the average of all underlying ratings.
arXiv Detail & Related papers (2025-02-06T02:58:46Z)
- Heavy-tailed Contamination is Easier than Adversarial Contamination [8.607294463464523]
A body of work in the statistics and computer science communities dating back to Huber (Huber, 1960) has led to statistically and computationally efficient outlier-robust estimators.
Two particular outlier models have received significant attention: the adversarial and heavy-tailed models.
arXiv Detail & Related papers (2024-11-22T19:00:33Z)
- The Surprising Benefits of Base Rate Neglect in Robust Aggregation [14.286448842405678]
Our work considers experts who tend to ignore the base rate.
We find that a certain degree of base rate neglect helps with robust forecast aggregation.
arXiv Detail & Related papers (2024-06-19T12:20:29Z)
- Nearly Optimal Algorithms for Contextual Dueling Bandits from Adversarial Feedback [58.66941279460248]
Learning from human feedback plays an important role in aligning generative models such as large language models (LLMs).
We study a model within this problem domain--contextual dueling bandits with adversarial feedback, where the true preference label can be flipped by an adversary.
We propose a robust contextual dueling bandit algorithm based on uncertainty-weighted maximum likelihood estimation.
arXiv Detail & Related papers (2024-04-16T17:59:55Z)
- Robust Decision Aggregation with Second-order Information [4.021926055330022]
We consider a decision aggregation problem with two experts who each make a binary recommendation after observing a private signal.
An agent, who does not know the joint information structure between signals and states, sees the experts' recommendations and aims to match the action with the true state.
In this scenario, we study whether additional second-order information enables better aggregation.
arXiv Detail & Related papers (2023-11-23T16:39:55Z)
- Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy [84.11508381847929]
Sparsely activated Mixture-of-Experts (SMoE) has shown promise to scale up the learning capacity of neural networks.
We propose M-SMoE, which leverages routing statistics to guide expert merging.
Our MC-SMoE achieves up to 80% memory and a 20% FLOPs reduction, with virtually no loss in performance.
arXiv Detail & Related papers (2023-10-02T16:51:32Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Are You Smarter Than a Random Expert? The Robust Aggregation of Substitutable Signals [14.03122229316614]
This paper initiates the study of forecast aggregation in a context where experts' knowledge is chosen adversarially from a broad class of information structures.
Under the projective substitutes condition, taking the average of the experts' forecasts improves substantially upon the strategy of trusting a random expert.
We show that by averaging the experts' forecasts and then extremizing the average by moving it away from the prior by a constant factor, the aggregator's performance guarantee is substantially better than is possible without knowledge of the prior.
arXiv Detail & Related papers (2021-11-04T20:50:30Z)
- Examining and Combating Spurious Features under Distribution Shift [94.31956965507085]
We define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics.
We prove that even when there is only bias of the input distribution, models can still pick up spurious features from their training data.
Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations.
arXiv Detail & Related papers (2021-06-14T05:39:09Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Malicious Experts versus the multiplicative weights algorithm in online prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other one is malicious, who knows true outcomes at each round and makes predictions in order to maximize the loss of the forecaster.
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
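The two-expert setting above pits the multiplicative weights algorithm against a malicious expert. A minimal sketch of multiplicative weights for two experts follows; the learning rate `eta` and the loss function (absolute error on binary outcomes) are illustrative assumptions, not choices from the cited paper.

```python
import math

def multiplicative_weights(predictions, outcomes, eta=0.5):
    """Aggregate two experts' predictions with multiplicative weights.

    predictions: list of (p0, p1) pairs, one prediction per expert per round.
    outcomes: list of true outcomes in {0, 1}.
    Returns the forecaster's weighted-average prediction for each round.
    """
    weights = [1.0, 1.0]
    forecasts = []
    for (p0, p1), y in zip(predictions, outcomes):
        total = weights[0] + weights[1]
        forecasts.append((weights[0] * p0 + weights[1] * p1) / total)
        # Exponentially down-weight each expert by its loss this round.
        weights[0] *= math.exp(-eta * abs(p0 - y))
        weights[1] *= math.exp(-eta * abs(p1 - y))
    return forecasts
```

If one expert is always wrong and the other always right, the forecaster starts at 0.5 and shifts its weight toward the accurate expert over successive rounds.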
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences.