Manipulation of individual judgments in the quantitative pairwise
comparisons method
- URL: http://arxiv.org/abs/2211.01809v1
- Date: Tue, 1 Nov 2022 22:35:00 GMT
- Title: Manipulation of individual judgments in the quantitative pairwise
comparisons method
- Authors: M. Strada and K. Kułakowski
- Abstract summary: It is commonly believed that experts (decision-makers) are honest in their judgments.
In our work, we consider a scenario in which experts are vulnerable to bribery.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decision-making methods very often use the technique of comparing
alternatives in pairs. In this approach, experts are asked to compare different
options, and then a quantitative ranking is created from the results obtained.
It is commonly believed that experts (decision-makers) are honest in their
judgments. In our work, we consider a scenario in which experts are vulnerable
to bribery. For this purpose, we define a framework that allows us to determine
the intended manipulation and present three algorithms for achieving the
intended goal. Analyzing these algorithms may provide clues to help defend
against such attacks.
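To make the setting concrete, here is a minimal sketch of how a ranking is typically derived from a multiplicative pairwise comparison matrix, together with a naive manipulation of the kind the paper studies. The geometric mean prioritization used below is one standard method (the abstract does not say which method the paper assumes), the bribery step is purely illustrative and not one of the paper's three algorithms, and all matrices and numbers are invented.

```python
# Minimal sketch, not the paper's framework: rank n alternatives from a
# multiplicative pairwise comparison (PC) matrix C, where C[i][j] estimates
# how strongly alternative i is preferred over j and C[j][i] = 1 / C[i][j].
import numpy as np

def geometric_mean_ranking(C: np.ndarray) -> np.ndarray:
    """Priority vector: geometric mean of each row, normalized to sum to 1."""
    w = np.prod(C, axis=1) ** (1.0 / C.shape[0])
    return w / w.sum()

# Honest judgments over three alternatives a0, a1, a2.
C = np.array([
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
])
print(geometric_mean_ranking(C))  # a0 is ranked first

# Illustrative manipulation (hypothetical, not one of the paper's three
# algorithms): a bribed expert inflates every judgment involving a2 by a
# constant factor while keeping the matrix reciprocal.
M = C.copy()
for j in (0, 1):
    M[2, j] *= 10.0
    M[j, 2] = 1.0 / M[2, j]
print(geometric_mean_ranking(M))  # a2 now comes out on top
```

The manipulation works because inflating one row (and reciprocally deflating the matching column) raises that alternative's geometric mean relative to the others; spotting such distorted judgments is the defensive problem the abstract points to.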
Related papers
- Integrating Expert Judgment and Algorithmic Decision Making: An Indistinguishability Framework [12.967730957018688]
We introduce a novel framework for human-AI collaboration in prediction and decision tasks.
Our approach leverages human judgment to distinguish inputs which are algorithmically indistinguishable, or "look the same" to any feasible predictive algorithm.
arXiv Detail & Related papers (2024-10-11T13:03:53Z) - Detection of decision-making manipulation in the pairwise comparisons method [0.2678472239880052]
This paper presents three simple methods of manipulating the pairwise comparisons method.
We then try to detect these manipulations using appropriately constructed neural networks.
Experimental results on generated data accompany the proposed solutions, showing a considerable level of manipulation detection.
arXiv Detail & Related papers (2024-05-26T20:58:12Z) - An Experimental Investigation into the Evaluation of Explainability
Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z) - Towards secure judgments aggregation in AHP [0.0]
It is common to assume that the experts are honest and professional.
Here, however, one or more experts in the group decision-making framework try to manipulate the results in their favor.
arXiv Detail & Related papers (2023-03-27T11:07:09Z) - Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z) - Heuristic Rating Estimation Method for the incomplete pairwise
comparisons matrices [0.0]
The Heuristic Rating Estimation (HRE) method enables decision-makers to decide based on existing ranking data and expert comparisons.
We show how the HRE algorithms can be extended so that the experts do not need to compare all alternatives pairwise; a hedged sketch of this idea follows the list below.
arXiv Detail & Related papers (2022-07-21T23:14:21Z) - Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z) - Doubting AI Predictions: Influence-Driven Second Opinion Recommendation [92.30805227803688]
We propose a way to augment human-AI collaboration by building on a common organizational practice: identifying experts who are likely to provide complementary opinions.
The proposed approach aims to leverage productive disagreement by identifying whether some experts are likely to disagree with an algorithmic assessment.
arXiv Detail & Related papers (2022-04-29T20:35:07Z) - Leveraging Expert Consistency to Improve Algorithmic Decision Support [62.61153549123407]
We explore the use of historical expert decisions as a rich source of information that can be combined with observed outcomes to narrow the construct gap.
We propose an influence function-based methodology to estimate expert consistency indirectly when each case in the data is assessed by a single expert.
Our empirical evaluation, using simulations in a clinical setting and real-world data from the child welfare domain, indicates that the proposed approach successfully narrows the construct gap.
arXiv Detail & Related papers (2021-01-24T05:40:29Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - Consistent Estimators for Learning to Defer to an Expert [5.076419064097734]
We show how to learn predictors that can either predict or choose to defer the decision to a downstream expert.
We show the effectiveness of our approach on a variety of experimental tasks.
arXiv Detail & Related papers (2020-06-02T18:21:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.