Decisions and Performance Under Bounded Rationality: A Computational
Benchmarking Approach
- URL: http://arxiv.org/abs/2005.12638v2
- Date: Wed, 2 Dec 2020 15:50:58 GMT
- Authors: Dainis Zegners, Uwe Sunde, Anthony Strittmatter
- Abstract summary: This paper presents a novel approach to analyze human decision-making.
It involves comparing the behavior of professional chess players relative to a computational benchmark of cognitively bounded rationality.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a novel approach to analyze human decision-making that
involves comparing the behavior of professional chess players relative to a
computational benchmark of cognitively bounded rationality. This benchmark is
constructed using algorithms of modern chess engines and allows investigating
behavior at the level of individual move-by-move observations, thus
representing a natural benchmark for computationally bounded optimization. The
analysis delivers novel insights by isolating deviations from this benchmark of
bounded rationality as well as their causes and consequences for performance.
The findings document the existence of several distinct dimensions of
behavioral deviations, which are related to asymmetric positional evaluation in
terms of losses and gains, time pressure, fatigue, and complexity. The results
also document that deviations from the benchmark do not necessarily entail
worse performance. Faster decisions are associated with more frequent
deviations from the benchmark, yet they are also associated with better
performance. The findings are consistent with an important influence of
intuition and experience, thereby shedding new light on the recent debate about
computational rationality in cognitive processes.
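As a rough illustration of such a move-by-move comparison, the sketch below scores each move of a game against a depth-limited engine via the python-chess library. The fixed depth limit and the Stockfish binary are illustrative assumptions standing in for the paper's calibrated benchmark of bounded rationality, not the authors' actual setup.

```python
import chess
import chess.engine

# Illustrative sketch: flag moves that deviate from a depth-limited
# engine benchmark. Assumes a Stockfish binary on PATH; the depth
# limit is a stand-in for the paper's calibrated cognitive bound.
def deviations_from_benchmark(moves_san, depth=12, engine_path="stockfish"):
    board = chess.Board()
    records = []
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for san in moves_san:
            info = engine.analyse(board, chess.engine.Limit(depth=depth))
            benchmark_move = info["pv"][0]  # engine's preferred move
            played_move = board.parse_san(san)
            records.append({
                "ply": board.ply(),
                "deviated": played_move != benchmark_move,
                "eval_cp": info["score"].white().score(mate_score=10000),
            })
            board.push(played_move)
    return records
```

Aggregating the `deviated` flags by remaining time, game phase, or position complexity would mirror the move-level analysis of causes and consequences described above.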
Related papers
- Bridging Internal Probability and Self-Consistency for Effective and Efficient LLM Reasoning [53.25336975467293]
We present the first theoretical error decomposition analysis of methods such as perplexity and self-consistency.
Our analysis reveals a fundamental trade-off: perplexity methods suffer from substantial model error due to the absence of a proper consistency function.
We propose Reasoning-Pruning Perplexity Consistency (RPC), which integrates perplexity with self-consistency, and Reasoning Pruning, which eliminates low-probability reasoning paths.
arXiv Detail & Related papers (2025-02-01T18:09:49Z)
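A hypothetical sketch of the two ingredients named above, pruning low-probability reasoning paths and then taking a probability-weighted self-consistency vote; the function name, quantile rule, and weighting scheme are assumptions, not the authors' RPC implementation.

```python
import math
from collections import defaultdict

# Toy illustration: each sampled reasoning path carries a final answer
# and the total log-probability of the generated tokens.
def rpc_style_vote(paths, prune_quantile=0.34):
    # Reasoning Pruning: drop the lowest-probability reasoning paths.
    paths = sorted(paths, key=lambda p: p["logprob"])
    kept = paths[int(len(paths) * prune_quantile):]

    # Perplexity-flavored self-consistency: weight each surviving
    # path's vote by its probability instead of counting it once.
    votes = defaultdict(float)
    for p in kept:
        votes[p["answer"]] += math.exp(p["logprob"])
    return max(votes, key=votes.get)

samples = [
    {"answer": "42", "logprob": -3.1},
    {"answer": "42", "logprob": -3.4},
    {"answer": "17", "logprob": -9.8},  # low-probability path, pruned
]
print(rpc_style_vote(samples))  # -> "42"
```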
- Switchable Decision: Dynamic Neural Generation Networks [98.61113699324429]
We propose a switchable decision mechanism that accelerates inference by dynamically assigning resources to each data instance.
Our method incurs less cost during inference while maintaining the same accuracy.
arXiv Detail & Related papers (2024-05-07T17:44:54Z)
- Uncertainty in GNN Learning Evaluations: A Comparison Between Measures for Quantifying Randomness in GNN Community Detection [4.358468367889626]
Real-world benchmarks are hard to interpret due to the multitude of design decisions influencing GNN evaluations.
The $W$ randomness coefficient, based on the Wasserstein distance, is identified as providing the most robust assessment of randomness.
arXiv Detail & Related papers (2023-12-14T15:06:29Z)
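The paper defines the $W$ coefficient precisely; as a rough sketch of the underlying idea, one can quantify the disagreement between repeated evaluations by averaging pairwise Wasserstein distances between the score samples obtained under different seeds. The aggregation below is an assumption, not the paper's formula.

```python
from itertools import combinations

import numpy as np
from scipy.stats import wasserstein_distance

# Sketch: each entry of runs holds per-run scores (e.g., modularity of
# detected communities) obtained under one random seed. Larger average
# pairwise distance means the evaluation is more seed-dependent.
def randomness_score(runs):
    pairs = combinations(runs, 2)
    return float(np.mean([wasserstein_distance(a, b) for a, b in pairs]))

rng = np.random.default_rng(0)
runs = [rng.normal(0.45, 0.02, size=50) for _ in range(5)]  # 5 seeds
print(randomness_score(runs))
```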
- Efficient Benchmarking of Language Models [22.696230279151166]
We present the problem of Efficient Benchmarking, namely, intelligently reducing the costs of LM evaluation without compromising reliability.
Using the HELM benchmark as a test case, we investigate how different benchmark design choices affect the computation-reliability trade-off.
We propose an evaluation algorithm that, when applied to the HELM benchmark, leads to dramatic cost savings with minimal loss of benchmark reliability.
arXiv Detail & Related papers (2023-08-22T17:59:30Z)
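The authors' algorithm is their own; a generic sketch of the computation-reliability trade-off they study is to subsample benchmark scenarios and check how well the reduced benchmark reproduces the full model ranking. The use of Kendall's tau and uniform subsampling are assumptions for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

# scores: models x scenarios matrix of benchmark results. Estimate how
# reliably a random subset of scenarios preserves the full ranking.
def subsample_reliability(scores, n_scenarios, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    full_rank = scores.mean(axis=1)
    taus = []
    for _ in range(n_trials):
        cols = rng.choice(scores.shape[1], size=n_scenarios, replace=False)
        tau, _ = kendalltau(full_rank, scores[:, cols].mean(axis=1))
        taus.append(tau)
    return float(np.mean(taus))

scores = np.random.default_rng(1).random((10, 40))  # 10 models, 40 scenarios
print(subsample_reliability(scores, n_scenarios=8))
```

Sweeping `n_scenarios` traces out the cost-reliability curve that such a benchmark-design analysis examines.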
- Are metaheuristics worth it? A computational comparison between nature-inspired and deterministic techniques on black-box optimization problems [0.0]
In this paper, we provide an extensive computational comparison of selected methods from each of these branches.
The results show that, when objective function evaluations are relatively cheap, nature-inspired methods perform significantly better than their deterministic counterparts.
arXiv Detail & Related papers (2022-12-13T19:44:24Z)
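For a flavor of this kind of head-to-head comparison (the paper's method portfolio and problem set are far larger), one can pit a nature-inspired optimizer against a deterministic local search on a cheap black-box objective with roughly matched evaluation budgets; the specific methods and budget here are illustrative choices.

```python
from scipy.optimize import differential_evolution, minimize, rosen

# Cheap black-box objective: the Rosenbrock function in 5 dimensions.
bounds = [(-5.0, 5.0)] * 5
x0 = [0.0] * 5

# Nature-inspired: differential evolution (~60 * 15 * 5 evaluations).
de = differential_evolution(rosen, bounds, maxiter=60, popsize=15, seed=0)

# Deterministic: Nelder-Mead local search under a similar budget.
nm = minimize(rosen, x0, method="Nelder-Mead",
              options={"maxfev": 5000, "xatol": 1e-8, "fatol": 1e-8})

print(f"DE: f={de.fun:.3e} after {de.nfev} evaluations")
print(f"NM: f={nm.fun:.3e} after {nm.nfev} evaluations")
```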
- Multi-Target Decision Making under Conditions of Severe Uncertainty [0.0]
We show how incomplete preferential and probabilistic information can be exploited to compare decisions among different targets.
We discuss some interesting properties of the proposed orders between decision options and show how they can be concretely computed by linear optimization.
We conclude the paper by demonstrating our framework in the context of comparing algorithms under different performance measures.
arXiv Detail & Related papers (2022-12-13T11:47:02Z)
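The paper's orders between decision options are defined there; a minimal sketch of the linear-optimization flavor of such a comparison, under the assumed (not the paper's) preference information that the targets are ranked in importance: option A weakly dominates option B if the minimum of the weighted score difference over all admissible weightings is non-negative.

```python
import numpy as np
from scipy.optimize import linprog

# a, b: scores of two decision options on k targets. Incomplete
# preferences are modeled as ranked weights w1 >= w2 >= ... >= wk,
# non-negative and summing to one (an illustrative assumption).
def weakly_dominates(a, b):
    d = np.asarray(a, float) - np.asarray(b, float)
    k = len(d)
    A_ub = np.zeros((k - 1, k))
    for i in range(k - 1):
        A_ub[i, i], A_ub[i, i + 1] = -1.0, 1.0  # w[i+1] - w[i] <= 0
    res = linprog(c=d, A_ub=A_ub, b_ub=np.zeros(k - 1),
                  A_eq=np.ones((1, k)), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * k)  # minimize w @ (a - b)
    return res.fun >= 0.0

print(weakly_dominates([0.9, 0.7, 0.8], [0.8, 0.6, 0.8]))  # True
print(weakly_dominates([0.9, 0.5, 0.8], [0.8, 0.7, 0.8]))  # False
```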
- Relational Surrogate Loss Learning [41.61184221367546]
This paper revisits surrogate loss learning, where a deep neural network is employed to approximate the evaluation metrics.
We show that it suffices to directly maintain the relation among models between surrogate losses and evaluation metrics, i.e., the surrogate should order models the way the metric does.
Our method is much easier to optimize and enjoys significant efficiency and performance gains.
arXiv Detail & Related papers (2022-02-26T17:32:57Z)
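A sketch of one common way to train a surrogate that preserves a metric's ordering of models, via a pairwise hinge loss; this loss choice is an assumption for illustration, not the paper's construction.

```python
import torch

# surrogate: differentiable scores; metric: (possibly non-differentiable)
# evaluation metric values for the same batch of models or outputs.
# Penalize every pair whose surrogate ordering contradicts the metric's.
def pairwise_rank_loss(surrogate, metric, margin=0.1):
    ds = surrogate[:, None] - surrogate[None, :]
    dm = metric[:, None] - metric[None, :]
    hinge = torch.clamp(margin - torch.sign(dm) * ds, min=0.0)
    return hinge[dm != 0].mean()

surrogate = torch.tensor([0.2, 0.9, 0.4], requires_grad=True)
metric = torch.tensor([0.1, 0.5, 0.8])  # metric disagrees on items 1 vs 2
loss = pairwise_rank_loss(surrogate, metric)
loss.backward()  # gradients push the surrogate toward the metric's ordering
```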
- Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z)
- Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions [51.8695223602729]
Adversarial attack methods have been developed to challenge the robustness of machine learning models.
We propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address these evaluation discrepancies.
The PSC toolkit offers options for balancing computational cost against evaluation effectiveness.
arXiv Detail & Related papers (2021-04-22T14:36:51Z)
- Loss Bounds for Approximate Influence-Based Abstraction [81.13024471616417]
Influence-based abstraction aims to gain leverage by modeling local subproblems together with the 'influence' that the rest of the system exerts on them.
This paper investigates the performance of such approaches from a theoretical perspective.
We show that neural networks trained with cross entropy are well suited to learn approximate influence representations.
arXiv Detail & Related papers (2020-11-03T15:33:10Z)
- Pseudo-Convolutional Policy Gradient for Sequence-to-Sequence Lip-Reading [96.48553941812366]
Lip-reading aims to infer the speech content from the lip movement sequence.
The traditional learning process of seq2seq models suffers from two problems.
We propose a novel pseudo-convolutional policy gradient (PCPG) based method to address these two problems.
arXiv Detail & Related papers (2020-03-09T09:12:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.