Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute
Selection Decisions
- URL: http://arxiv.org/abs/2007.16119v1
- Date: Fri, 31 Jul 2020 15:04:49 GMT
- Title: Lookahead and Hybrid Sample Allocation Procedures for Multiple Attribute
Selection Decisions
- Authors: Jeffrey W. Herrmann and Kunal Mehta
- Abstract summary: This paper considers settings in which each measurement yields one sample of one attribute for one alternative.
When given a fixed number of samples to collect, the decision-maker must determine which samples to obtain, make the measurements, update prior beliefs about the attribute magnitudes, and then select an alternative.
- Score: 0.9137554315375922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Attributes provide critical information about the alternatives that a
decision-maker is considering. When their magnitudes are uncertain, the
decision-maker may be unsure about which alternative is truly the best, so
measuring the attributes may help the decision-maker make a better decision.
This paper considers settings in which each measurement yields one sample of
one attribute for one alternative. When given a fixed number of samples to
collect, the decision-maker must determine which samples to obtain, make the
measurements, update prior beliefs about the attribute magnitudes, and then
select an alternative. This paper presents the sample allocation problem for
multiple attribute selection decisions and proposes two sequential, lookahead
procedures for the case in which discrete distributions are used to model the
uncertain attribute magnitudes. The two procedures are similar but reflect
different quality measures (and loss functions), which motivate different
decision rules: (1) select the alternative with the greatest expected utility
and (2) select the alternative that is most likely to be the truly best
alternative. We conducted a simulation study to evaluate the performance of the
sequential procedures and hybrid procedures that first allocate some samples
using a uniform allocation procedure and then use the sequential, lookahead
procedure. The results indicate that the hybrid procedures are effective;
allocating many (but not all) of the initial samples with the uniform
allocation procedure not only reduces overall computational effort but also
selects alternatives that have lower average opportunity cost and are more
often truly best.
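The two decision rules can be made concrete with a small worked example. Below is a minimal sketch, assuming an additive utility over attributes with equal weights and toy two-alternative data (none of which comes from the paper): rule (1) picks the alternative with the greatest expected utility, and rule (2) picks the one most likely to be truly best, computed by enumerating the joint discrete distribution.

```python
import itertools
import numpy as np

# Hedged sketch of the two decision rules for discrete attribute
# distributions. The additive utility, the weights, and the toy numbers
# are illustrative assumptions, not taken from the paper.

def expected_utility(alternative, weights):
    """alternative: one (values, probs) pair per attribute."""
    return sum(w * np.dot(vals, probs)
               for w, (vals, probs) in zip(weights, alternative))

def prob_truly_best(alternatives, weights):
    """P(each alternative has the highest true utility), by enumerating
    every joint realization of the discrete attribute values."""
    n, k = len(alternatives), len(alternatives[0])
    wins = np.zeros(n)
    supports = [range(len(vals)) for alt in alternatives for vals, _ in alt]
    for combo in itertools.product(*supports):
        p, utils = 1.0, np.zeros(n)
        for a in range(n):
            for j in range(k):
                vals, probs = alternatives[a][j]
                idx = combo[a * k + j]
                p *= probs[idx]
                utils[a] += weights[j] * vals[idx]
        wins[np.argmax(utils)] += p
    return wins

# Alternative 0 has utility 4 for certain; alternative 1 is risky.
alts = [
    [([2.0], [1.0]), ([2.0], [1.0])],
    [([0.0], [1.0]), ([0.0, 7.0], [0.45, 0.55])],
]
weights = [1.0, 1.0]
eus = [expected_utility(a, weights) for a in alts]   # [4.0, 3.85]
pbs = prob_truly_best(alts, weights)                 # [0.45, 0.55]
rule1_choice = int(np.argmax(eus))   # 0: greatest expected utility
rule2_choice = int(np.argmax(pbs))   # 1: most likely to be truly best
```

The toy data is chosen so the two rules disagree: the certain alternative wins on expected utility, while the risky one is more often truly best, which mirrors why the two quality measures motivate different decision rules.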
Related papers
- An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting [53.36437745983783]
We first construct a max-margin optimization-based model to capture potentially non-monotonic preferences.
We devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration.
Two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences.
arXiv Detail & Related papers (2024-09-04T14:36:20Z)
- Feature Selection as Deep Sequential Generative Learning [50.00973409680637]
We develop a deep variational transformer model over a joint of sequential reconstruction, variational, and performance evaluator losses.
Our model can distill feature selection knowledge and learn a continuous embedding space to map feature selection decision sequences into embedding vectors associated with utility scores.
arXiv Detail & Related papers (2024-03-06T16:31:56Z)
- Large Language Models Are Not Robust Multiple Choice Selectors [117.72712117510953]
Multiple choice questions (MCQs) serve as a common yet important task format in the evaluation of large language models (LLMs).
This work shows that modern LLMs are vulnerable to option position changes due to their inherent "selection bias."
We propose a label-free, inference-time debiasing method, called PriDe, which separates the model's prior bias for option IDs from the overall prediction distribution.
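The separation PriDe describes can be sketched in miniature: estimate the model's prior bias over option IDs by averaging its scores across cyclic permutations of the option contents, then divide that prior out of an observed prediction distribution. The toy "model" below (a fixed ID bias multiplied by a content-quality score) and the cyclic-permutation scheme are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

ID_BIAS = np.array([0.2, 0.5, 0.3])          # hypothetical bias toward ID "B"
QUALITY = {"x": 0.5, "y": 0.3, "z": 0.2}     # hypothetical content quality

def toy_model(contents):
    """Biased scorer: P(ID i) is proportional to ID_BIAS[i] * QUALITY[contents[i]]."""
    raw = ID_BIAS * np.array([QUALITY[c] for c in contents])
    return raw / raw.sum()

def estimate_prior(score_fn, contents):
    """Average scores per option ID over cyclic permutations of the
    contents, so content effects (roughly) cancel, leaving the ID prior."""
    n = len(contents)
    acc = sum(score_fn(contents[s:] + contents[:s]) for s in range(n))
    prior = acc / n
    return prior / prior.sum()

def debias(observed, prior):
    """Divide the ID prior out of an observed prediction distribution."""
    p = observed / prior
    return p / p.sum()

observed = toy_model(["x", "y", "z"])                 # biased toward ID "B"
prior = estimate_prior(toy_model, ["x", "y", "z"])
debiased = debias(observed, prior)                    # best content wins
```

In this toy setting the raw prediction favors the biased option ID, while the debiased distribution recovers the option with the best content.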
arXiv Detail & Related papers (2023-09-07T17:44:56Z)
- Domain Generalization via Rationale Invariance [70.32415695574555]
This paper offers a new perspective to ease the challenge of domain generalization, which involves maintaining robust results even in unseen environments.
We propose treating the element-wise contributions to the final results as the rationale for making a decision and representing the rationale for each sample as a matrix.
Our experiments demonstrate that the proposed approach achieves competitive results across various datasets, despite its simplicity.
arXiv Detail & Related papers (2023-08-22T03:31:40Z)
- Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis [52.04932081106623]
We propose a unified framework to interpret and evaluate the existing probability-based prompt selection methods.
We find that each of the existing methods can be interpreted as some variant of the method that maximizes mutual information between the input and the predicted output (MI).
We also propose a novel calibration method, Calibration By Marginalization (CBM), which is complementary to the existing methods and helps increase the prompt selection effectiveness of the best method to 96.85%, achieving 99.44% of the oracle prompt F1 without calibration.
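The MI criterion mentioned above can be sketched directly: score a prompt by the mutual information between inputs and predicted outputs, estimated from the prediction distributions the prompt induces on a batch of inputs. The decomposition MI(X; Y) = H(E_x[p(y|x)]) - E_x[H(p(y|x))] and the toy distributions below are standard identities and illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def mi_score(pred_dists):
    """MI(X; Y) = H(marginal) - mean conditional entropy: high when
    predictions are individually confident but collectively diverse."""
    p = np.asarray(pred_dists, dtype=float)
    entropy = lambda q: float(-np.sum(q * np.log(q + 1e-12)))
    marginal = p.mean(axis=0)
    return entropy(marginal) - float(np.mean([entropy(row) for row in p]))

# A "good" prompt: confident, diverse predictions across two inputs.
confident = mi_score([[0.99, 0.01], [0.01, 0.99]])
# A "bad" prompt: uniform, uninformative predictions.
uninformative = mi_score([[0.5, 0.5], [0.5, 0.5]])
```

Ranking candidate prompts by this score prefers prompts whose outputs carry information about the input, which is the common thread the unified framework identifies across existing methods.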
arXiv Detail & Related papers (2023-05-24T08:29:50Z)
- Bi-objective Ranking and Selection Using Stochastic Kriging [0.0]
We consider bi-objective ranking and selection problems in which the two objective outcomes have been observed with uncertainty.
We propose a novel Bayesian bi-objective ranking and selection method that sequentially allocates extra samples to competitive solutions.
Experimental results show that the proposed method outperforms the standard allocation method, as well as a well-known state-of-the-art algorithm.
arXiv Detail & Related papers (2022-09-05T23:51:07Z)
- Meta-Learning Approaches for a One-Shot Collective-Decision Aggregation: Correctly Choosing how to Choose Correctly [0.7874708385247353]
We present two one-shot machine-learning-based aggregation approaches.
The first predicts, given multiple features about the collective's choices, which aggregation method will be best for a given case.
The second directly predicts which decision is optimal, given, among other things, the selection made by each method.
arXiv Detail & Related papers (2022-04-03T15:06:59Z)
- SelectAugment: Hierarchical Deterministic Sample Selection for Data Augmentation [72.58308581812149]
We propose an effective approach, dubbed SelectAugment, to select samples to be augmented in a deterministic and online manner.
Specifically, in each batch, we first determine the augmentation ratio, and then decide whether to augment each training sample under this ratio.
In this way, the negative effects of the randomness in selecting samples to augment can be effectively alleviated and the effectiveness of DA is improved.
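The two-step structure described above (first fix a per-batch augmentation ratio, then deterministically pick which samples to augment) can be sketched in a much-simplified form. Ranking samples by loss is an assumption for illustration; the paper learns both decisions hierarchically.

```python
# Hedged, much-simplified sketch of deterministic per-batch sample
# selection in the spirit of SelectAugment. The loss-based ranking rule
# is an assumption; the actual method learns the selection policy.

def select_for_augmentation(losses, ratio):
    """Given per-sample losses in a batch and an augmentation ratio,
    deterministically pick which samples to augment (highest loss first)."""
    k = max(1, round(ratio * len(losses)))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    return sorted(ranked[:k])

chosen = select_for_augmentation([0.1, 0.9, 0.5, 0.3], ratio=0.5)
```

Because the selection depends only on the batch statistics and the ratio, the same batch always yields the same augmented subset, removing the randomness the summary refers to.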
arXiv Detail & Related papers (2021-12-06T08:38:38Z)
- Feature Selection Methods for Cost-Constrained Classification in Random Forests [3.4806267677524896]
Cost-sensitive feature selection describes a feature selection problem, where features raise individual costs for inclusion in a model.
Random Forests define a particularly challenging problem for feature selection, as features are generally entangled in an ensemble of multiple trees.
We propose Shallow Tree Selection, a novel fast and multivariate feature selection method that selects features from small tree structures.
arXiv Detail & Related papers (2020-08-14T11:39:52Z)
- Probabilistic Value Selection for Space Efficient Model [10.109875612945658]
Two probabilistic methods based on information theory's metric are proposed: PVS and P + VS.
Experiment results show that value selection can achieve the balance between accuracy and model size reduction.
arXiv Detail & Related papers (2020-07-09T08:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.