Heuristic Rating Estimation Method for the incomplete pairwise
comparisons matrices
- URL: http://arxiv.org/abs/2207.10783v1
- Date: Thu, 21 Jul 2022 23:14:21 GMT
- Title: Heuristic Rating Estimation Method for the incomplete pairwise
comparisons matrices
- Authors: Konrad Kułakowski and Anna Kędzior
- Abstract summary: Heuristic Rating Estimation Method enables decision-makers to decide based on existing ranking data and expert comparisons.
We show how these algorithms can be extended so that the experts do not need to compare all alternatives pairwise.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Heuristic Rating Estimation Method enables decision-makers to decide
based on existing ranking data and expert comparisons. In this approach, the
ranking values of selected alternatives are known in advance, while these
values have to be calculated for the remaining ones. Their calculation can be
performed using either an additive or a multiplicative method. Both methods
assumed that the pairwise comparison sets involved in the computation were
complete. In this paper, we show how these algorithms can be extended so that
the experts do not need to compare all alternatives pairwise. By shortening the
experts' work, the presented improved methods reduce the costs of the
decision-making procedure and both facilitate and shorten the stage of
collecting decision-making data.
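As a rough illustration of the additive variant described in the abstract, the sketch below estimates the unknown ranking values from an incomplete pairwise comparison matrix, averaging only over the comparisons that are actually present (missing entries are marked with NaN). This is a minimal reading of the abstract, not the authors' exact algorithm; the matrix, the fixed values and the function name are illustrative.

```python
import numpy as np

def hre_additive_incomplete(C, known):
    """Estimate ranking values of unknown alternatives from an incomplete
    pairwise comparison matrix C, where C[i, j] ~ w(a_i) / w(a_j) and NaN
    marks a missing comparison. `known` maps indices of alternatives whose
    ranking values are fixed in advance to those values."""
    n = C.shape[0]
    unknown = [i for i in range(n) if i not in known]
    pos = {a: k for k, a in enumerate(unknown)}   # position in the linear system

    A = np.eye(len(unknown))
    b = np.zeros(len(unknown))
    for i in unknown:
        # only the alternatives actually compared with a_i contribute
        compared = [j for j in range(n) if j != i and not np.isnan(C[i, j])]
        m = len(compared)
        for j in compared:
            if j in known:
                b[pos[i]] += C[i, j] * known[j] / m
            else:
                A[pos[i], pos[j]] -= C[i, j] / m

    w = np.linalg.solve(A, b)                     # ranking values of the unknowns
    return {i: w[pos[i]] for i in unknown}

# toy example: values of a_0 and a_1 are known, the comparison (a_2, a_3) is missing
C = np.array([
    [1.0,  2.0,  0.5,    4.0],
    [0.5,  1.0,  0.25,   2.0],
    [2.0,  4.0,  1.0,    np.nan],
    [0.25, 0.5,  np.nan, 1.0],
])
print(hre_additive_incomplete(C, known={0: 0.4, 1: 0.2}))   # ~{2: 0.8, 3: 0.1}
```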
Related papers
- Detection of decision-making manipulation in the pairwise comparisons method [0.2678472239880052]
This paper presents three simple manipulation methods in the pairwise comparison method.
We then try to detect these methods using appropriately constructed neural networks.
Experimental results accompany the proposed solutions on the generated data, showing a considerable manipulation detection level.
arXiv Detail & Related papers (2024-05-26T20:58:12Z)
- Approximating Score-based Explanation Techniques Using Conformal Regression [0.1843404256219181]
Score-based explainable machine-learning techniques are often used to understand the logic behind black-box models.
We propose and investigate the use of computationally less costly regression models for approximating the output of score-based explanation techniques, such as SHAP.
We present results from a large-scale empirical investigation, in which the approximate explanations generated by our proposed models are evaluated with respect to efficiency.
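A minimal sketch of the general idea of replacing expensive explanation computations with a cheap surrogate regressor, assuming the shap and scikit-learn packages; the random-forest model, synthetic data and plain ridge surrogates are illustrative, and the paper's conformal regression machinery is not reproduced here.

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# black-box model whose explanations we want to approximate
X, y = make_regression(n_samples=2000, n_features=10, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# "expensive" explanations: per-instance SHAP values on the training split
explainer = shap.TreeExplainer(model)
shap_tr = explainer.shap_values(X_tr)

# cheap surrogate: one ridge regressor per feature, mapping inputs to SHAP values
surrogates = [Ridge().fit(X_tr, shap_tr[:, j]) for j in range(X.shape[1])]

# approximate explanations for unseen instances without running the explainer
approx = np.column_stack([s.predict(X_te) for s in surrogates])
exact = explainer.shap_values(X_te)
print("mean absolute approximation error:", np.abs(approx - exact).mean())
```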
arXiv Detail & Related papers (2023-08-23T07:50:43Z)
- Are metaheuristics worth it? A computational comparison between nature-inspired and deterministic techniques on black-box optimization problems [0.0]
In this paper, we provide an extensive computational comparison of selected methods from each of these branches.
The results showed that, when dealing with situations where the objective function evaluations are relatively cheap, the nature-inspired methods have a significantly better performance than their deterministic counterparts.
arXiv Detail & Related papers (2022-12-13T19:44:24Z)
- Model-Free Reinforcement Learning with the Decision-Estimation Coefficient [79.30248422988409]
We consider the problem of interactive decision making, encompassing structured bandits and reinforcement learning with general function approximation.
We use this approach to derive regret bounds for model-free reinforcement learning with value function approximation, and give structural results showing when it can and cannot help more generally.
arXiv Detail & Related papers (2022-11-25T17:29:40Z)
- Manipulation of individual judgments in the quantitative pairwise comparisons method [0.0]
It is commonly believed that experts (decision-makers) are honest in their judgments.
In our work, we consider a scenario in which experts are vulnerable to bribery.
arXiv Detail & Related papers (2022-11-01T22:35:00Z)
- Adaptive Sampling for Heterogeneous Rank Aggregation from Noisy Pairwise Comparisons [85.5955376526419]
In rank aggregation problems, users exhibit various accuracy levels when comparing pairs of items.
We propose an elimination-based active sampling strategy, which estimates the ranking of items via noisy pairwise comparisons.
We prove that our algorithm can return the true ranking of items with high probability.
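As background on the problem setting only (the paper's elimination-based active sampling strategy is not reproduced), a minimal sketch that recovers a ranking from repeated noisy pairwise comparisons by sorting items by their empirical win rates; the item scores, noise model and comparison budget are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
true_scores = np.array([0.9, 0.7, 0.5, 0.3, 0.1])   # hidden item qualities
n = len(true_scores)

# simulate noisy comparisons: i beats j with probability s_i / (s_i + s_j)
wins = np.zeros((n, n))
counts = np.zeros((n, n))
for _ in range(2000):
    i, j = rng.choice(n, size=2, replace=False)
    p_i_wins = true_scores[i] / (true_scores[i] + true_scores[j])
    winner, loser = (i, j) if rng.random() < p_i_wins else (j, i)
    wins[winner, loser] += 1
    counts[i, j] += 1
    counts[j, i] += 1

# empirical win rate of each item over the comparisons it took part in
win_rate = wins.sum(axis=1) / np.maximum(counts.sum(axis=1), 1)
print("estimated ranking (best first):", np.argsort(-win_rate))
```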
arXiv Detail & Related papers (2021-10-08T13:51:55Z)
- Estimating leverage scores via rank revealing methods and randomization [50.591267188664666]
We study algorithms for estimating the statistical leverage scores of rectangular dense or sparse matrices of arbitrary rank.
Our approach is based on combining rank revealing methods with compositions of dense and sparse randomized dimensionality reduction transforms.
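For context on the quantity being estimated (this is not the paper's algorithm), a minimal sketch that computes exact statistical leverage scores from a thin QR factorization, together with a crude randomized approximation from a Gaussian sketch of the rows; the matrix sizes and the sketch size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 20))     # tall dense matrix with full column rank

# exact leverage scores: squared row norms of Q from a thin QR of A
Q, _ = np.linalg.qr(A, mode="reduced")
exact = np.sum(Q**2, axis=1)

# randomized approximation: sketch the rows of A, take R from a QR of the
# sketch, and use the squared row norms of A @ inv(R) as approximate scores
r = 200                                  # sketch size (assumption), r >> 20
S = rng.standard_normal((r, A.shape[0])) / np.sqrt(r)
_, R = np.linalg.qr(S @ A, mode="reduced")
approx = np.sum(np.linalg.solve(R.T, A.T).T**2, axis=1)   # rows of A @ inv(R)

print("max relative error:", np.max(np.abs(approx - exact) / exact))
```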
arXiv Detail & Related papers (2021-05-23T19:21:55Z)
- A Statistical Analysis of Summarization Evaluation Metrics using Resampling Methods [60.04142561088524]
We find that the confidence intervals are rather wide, demonstrating high uncertainty in how reliable automatic metrics truly are.
Although many metrics fail to show statistical improvements over ROUGE, two recent works, QAEval and BERTScore, do in some evaluation settings.
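A minimal sketch of the kind of resampling analysis described, on purely synthetic data: bootstrap over documents to obtain a confidence interval for a system-level Kendall correlation between human scores and an automatic metric (the benchmark datasets and metrics analysed in the paper are not reproduced).

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)

# synthetic scores for 20 systems on 100 documents; the metric is a noisier
# proxy of the same underlying system quality as the human judgments
n_systems, n_docs = 20, 100
quality = rng.normal(size=(n_systems, 1))
human = quality + 0.3 * rng.normal(size=(n_systems, n_docs))
metric = quality + 0.6 * rng.normal(size=(n_systems, n_docs))

def system_level_correlation(h, m):
    # correlate per-system mean human score with per-system mean metric score
    tau, _ = kendalltau(h.mean(axis=1), m.mean(axis=1))
    return tau

# bootstrap over documents: resample columns, recompute the correlation
boot = []
for _ in range(1000):
    cols = rng.integers(0, n_docs, size=n_docs)
    boot.append(system_level_correlation(human[:, cols], metric[:, cols]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"Kendall tau = {system_level_correlation(human, metric):.3f}, "
      f"95% bootstrap CI = [{low:.3f}, {high:.3f}]")
```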
arXiv Detail & Related papers (2021-03-31T18:28:14Z)
- Methods of ranking for aggregated fuzzy numbers from interval-valued data [0.0]
This paper primarily presents two methods of ranking aggregated fuzzy numbers from intervals using the Interval Agreement Approach (IAA).
The shortcomings of previous measures, along with the improvements of the proposed methods, are illustrated using both a synthetic and real-world application.
arXiv Detail & Related papers (2020-12-03T02:56:15Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations of these optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
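A minimal numpy sketch of the perturbed-argmax idea behind this framework: adding random noise to the input of an argmax and averaging the results yields a smooth, never-locally-constant relaxation; with Gumbel noise the expectation equals a softmax, which provides a sanity check. The noise scale and Monte Carlo budget are illustrative, and the paper's general construction and gradient estimators are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_argmax(theta, eps=0.5, n_samples=100_000):
    """Monte Carlo estimate of E[one_hot(argmax(theta + eps * Z))], a smooth
    relaxation of the piecewise-constant argmax."""
    Z = rng.gumbel(size=(n_samples, theta.size))
    winners = np.argmax(theta + eps * Z, axis=1)
    return np.bincount(winners, minlength=theta.size) / n_samples

theta = np.array([1.0, 2.0, 0.5])
print("perturbed argmax:", perturbed_argmax(theta))

# with Gumbel noise the expectation equals softmax(theta / eps)
softmax = np.exp(theta / 0.5) / np.exp(theta / 0.5).sum()
print("softmax(theta/eps):", softmax)
```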
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.