Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty
- URL: http://arxiv.org/abs/2205.06483v1
- Date: Fri, 13 May 2022 07:29:15 GMT
- Title: Modeling Human Behavior Part II -- Cognitive approaches and Uncertainty
- Authors: Andrew Fuchs and Andrea Passarella and Marco Conti
- Abstract summary: In Part I, we discussed methods which generate a model of behavior from exploration of the system and feedback based on the exhibited behavior.
In this work, we will continue the discussion from the perspective of methods which focus on the assumed cognitive abilities, limitations, and biases demonstrated in human reasoning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As we discussed in Part I of this topic, there is a clear desire to model and
comprehend human behavior. Given the popular presupposition of human reasoning
as the standard for learning and decision-making, there have been significant
efforts and a growing trend in research to replicate these innate human
abilities in artificial systems. In Part I, we discussed learning methods which
generate a model of behavior from exploration of the system and feedback based
on the exhibited behavior, as well as topics relating to the use of, or
accounting for, beliefs about the skills or mental states of
others. In this work, we will continue the discussion from the perspective of
methods which focus on the assumed cognitive abilities, limitations, and biases
demonstrated in human reasoning. We arrange these topics as follows: (i)
methods such as cognitive architectures, cognitive heuristics, and related
approaches, which assume limits on cognitive resources and model how those
limits shape decisions, and (ii) methods which generate and utilize
representations of bias or uncertainty to model human decision-making or the
future outcomes of decisions.
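To make class (ii) more concrete, the snippet below sketches a noisy-rational (Boltzmann/softmax) choice model, one of the simplest ways to represent bias and limited cognitive resources when modeling human decisions. It is an illustrative example only, not code from the surveyed papers; the inverse-temperature parameter `beta` and the utility values are hypothetical.

```python
import numpy as np

def noisy_rational_choice(utilities, beta=1.0, rng=None):
    """Sample a choice from a Boltzmann (softmax) policy over option utilities.

    beta is an inverse-temperature / "rationality" parameter: beta -> 0 gives
    near-uniform random choices (severely limited deliberation), while large
    beta approaches perfectly rational utility maximization.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(utilities, dtype=float)
    logits = beta * (u - u.max())            # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return rng.choice(len(u), p=probs), probs

# Example: three options with utilities 1.0, 2.0, 3.0.
# A low-beta (resource-limited) decision maker picks almost at random,
# while a high-beta one almost always takes the best option.
for beta in (0.1, 1.0, 10.0):
    _, p = noisy_rational_choice([1.0, 2.0, 3.0], beta=beta)
    print(f"beta={beta:>4}: choice probabilities {np.round(p, 3)}")
```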
Related papers
- Mimicking Human Intuition: Cognitive Belief-Driven Q-Learning [5.960184723807347]
We propose Cognitive Belief-Driven Q-Learning (CBDQ), which integrates subjective belief modeling into the Q-learning framework.
CBDQ enhances decision-making accuracy by endowing agents with human-like learning and reasoning capabilities.
We evaluate the proposed method on discrete control benchmark tasks in various complex environments. (An illustrative belief-weighted Q-learning sketch appears after this list.)
arXiv Detail & Related papers (2024-10-02T16:50:29Z) - ConSiDERS-The-Human Evaluation Framework: Rethinking Human Evaluation for Generative Large Language Models [53.00812898384698]
We argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking.
We highlight how cognitive biases can conflate fluent information and truthfulness, and how cognitive uncertainty affects the reliability of rating scores such as Likert.
We propose the ConSiDERS-The-Human evaluation framework consisting of 6 pillars -- Consistency, Scoring Criteria, Differentiating, User Experience, Responsible, and Scalability.
arXiv Detail & Related papers (2024-05-28T22:45:28Z) - Benchmarking Continual Learning from Cognitive Perspectives [14.867136605254975]
Continual learning addresses the problem of continuously acquiring and transferring knowledge without catastrophic forgetting of old concepts.
There is a mismatch between cognitive properties and evaluation methods of continual learning models.
We propose to integrate model cognitive capacities and evaluation metrics into a unified evaluation paradigm.
arXiv Detail & Related papers (2023-12-06T06:27:27Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, a quantity that is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Modeling Human Behavior Part I -- Learning and Belief Approaches [0.0]
We focus on techniques which learn a model or policy of behavior through exploration and feedback.
Next generation autonomous and adaptive systems will largely include AI agents and humans working together as teams.
arXiv Detail & Related papers (2022-05-13T07:33:49Z) - Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z) - Active Inference in Robotics and Artificial Agents: Survey and Challenges [51.29077770446286]
We review the state-of-the-art theory and implementations of active inference for state-estimation, control, planning and learning.
We showcase relevant experiments that illustrate its potential in terms of adaptation, generalization and robustness.
arXiv Detail & Related papers (2021-12-03T12:10:26Z) - Cognitive science as a source of forward and inverse models of human decisions for robotics and control [13.502912109138249]
We look at how cognitive science can provide forward models of human decision-making.
We highlight approaches that synthesize blackbox and theory-driven modeling.
We aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
arXiv Detail & Related papers (2021-09-01T00:28:28Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks.
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
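The Cognitive Belief-Driven Q-Learning entry above describes integrating subjective belief modeling into the Q-learning framework. The exact CBDQ formulation is given in the cited paper; the sketch below only illustrates the general pattern of blending a learned Q-table with a separately maintained subjective belief distribution over actions. The environment interface `env_step`, the mixing weight `mix`, and the belief-update rule are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def belief_weighted_q_learning(env_step, n_states, n_actions, episodes=500,
                               alpha=0.1, gamma=0.99, beta=5.0, mix=0.5, seed=0):
    """Tabular Q-learning whose action selection mixes a softmax over Q-values
    with a separately maintained subjective belief distribution over actions.

    `env_step(state, action)` is assumed to return (next_state, reward, done),
    and episodes are assumed to start in state 0. The belief table is nudged
    towards actions that produced positive TD errors, loosely mimicking a
    decision maker whose prior expectations shape subsequent choices.
    """
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    belief = np.full((n_states, n_actions), 1.0 / n_actions)  # subjective prior

    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Softmax over Q-values, blended with the subjective belief.
            logits = beta * (Q[state] - Q[state].max())
            q_probs = np.exp(logits) / np.exp(logits).sum()
            probs = mix * q_probs + (1.0 - mix) * belief[state]
            action = rng.choice(n_actions, p=probs / probs.sum())

            next_state, reward, done = env_step(state, action)

            # Standard Q-learning temporal-difference update.
            target = reward + gamma * (0.0 if done else Q[next_state].max())
            td_error = target - Q[state, action]
            Q[state, action] += alpha * td_error

            # Nudge the belief towards actions that outperformed expectations.
            belief[state, action] += 0.05 * max(td_error, 0.0)
            belief[state] /= belief[state].sum()

            state = next_state
    return Q, belief
```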