Capturing Human Cognitive Styles with Language: Towards an Experimental Evaluation Paradigm
- URL: http://arxiv.org/abs/2502.13326v1
- Date: Tue, 18 Feb 2025 23:08:15 GMT
- Title: Capturing Human Cognitive Styles with Language: Towards an Experimental Evaluation Paradigm
- Authors: Vasudha Varadarajan, Syeda Mahwish, Xiaoran Liu, Julia Buffolino, Christian C. Luhmann, Ryan L. Boyd, H. Andrew Schwartz
- Abstract summary: We introduce an experiment-based framework for evaluating language-based cognitive style models against human behavior. We find that language features, intended to capture cognitive style, can predict participants' decision style with moderate-to-high accuracy.
- Score: 8.479236801214816
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While NLP models often seek to capture cognitive states via language, the validity of predicted states is determined by comparing them to annotations created without access to the cognitive states of the authors. In behavioral sciences, cognitive states are instead measured via experiments. Here, we introduce an experiment-based framework for evaluating language-based cognitive style models against human behavior. We explore the phenomenon of decision making and its relationship to the linguistic style of an individual talking about a recent decision they made. Participants then complete a classical decision-making experiment that captures their cognitive style, determined by how preferences change during a decision exercise. We find that language features, intended to capture cognitive style, can predict participants' decision style with moderate-to-high accuracy (AUC ~ 0.8), demonstrating that cognitive style can be partly captured and revealed by discourse patterns.
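A minimal sketch of the evaluation paradigm described in the abstract: predict the experimentally measured decision style from language features and score the prediction with ROC AUC. TF-IDF features and a logistic-regression classifier are illustrative stand-ins here; the paper's actual feature set and model are not reproduced.

```python
# Sketch only: language features -> experimentally measured decision style, scored by AUC.
# TfidfVectorizer and LogisticRegression stand in for the paper's actual features/model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline


def evaluate_style_prediction(narratives, decision_style, n_folds=5):
    """narratives: one free-text decision account per participant (hypothetical input).
    decision_style: binary style label per participant from the behavioral experiment."""
    pipeline = make_pipeline(
        TfidfVectorizer(min_df=2),
        LogisticRegression(max_iter=1000),
    )
    # Cross-validated probabilities keep each participant out of their own training fold.
    probs = cross_val_predict(
        pipeline, narratives, decision_style, cv=n_folds, method="predict_proba"
    )[:, 1]
    return roc_auc_score(decision_style, probs)
```

Called on a list of participant narratives and their experimentally derived labels, this returns a single AUC comparable to the ~0.8 figure reported in the abstract.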
Related papers
- Implicit In-Context Learning: Evidence from Artificial Language Experiments [0.0]
Humans acquire language through implicit learning, absorbing complex patterns without explicit awareness.
We adapted three classic artificial language learning experiments spanning morphology, morphosyntax, and syntax to evaluate implicit learning at the inference level.
Our results reveal linguistic domain-specific alignment between models and human behaviors.
arXiv Detail & Related papers (2025-03-31T15:07:08Z)
- Cross-lingual Speech Emotion Recognition: Humans vs. Self-Supervised Models [16.0617753653454]
This study presents a comparative analysis between human performance and SSL models.
We also compare the SER ability of models and humans at both utterance- and segment-levels.
Our findings reveal that models, with appropriate knowledge transfer, can adapt to the target language and achieve performance comparable to native speakers.
arXiv Detail & Related papers (2024-09-25T13:27:17Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
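The "information-theoretic policies" in the Learning Phonotactics summary suggest an active-learning loop. The sketch below uses maximum predictive entropy as one such policy; the entropy criterion and the helper names are assumptions, not the paper's exact procedure.

```python
# Hypothetical active-learning loop: query the informant about the candidate form whose
# current label is most uncertain (maximum predictive entropy). Entropy is only one
# information-theoretic policy; the paper explores a range of them.
import math


def entropy(p):
    """Binary entropy of P(well-formed) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))


def select_query(candidates, predict_proba):
    """candidates: unlabeled candidate forms; predict_proba: model's P(well-formed)."""
    return max(candidates, key=lambda form: entropy(predict_proba(form)))
```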
- Holmes: A Benchmark to Assess the Linguistic Competence of Language Models [59.627729608055006]
We introduce Holmes, a new benchmark designed to assess the linguistic competence of language models (LMs).
We use computation-based probing to examine LMs' internal representations regarding distinct linguistic phenomena.
As a result, we meet recent calls to disentangle LMs' linguistic competence from other cognitive abilities.
arXiv Detail & Related papers (2024-04-29T17:58:36Z)
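"Computation-based probing" in the Holmes summary usually means training small classifiers on a model's frozen internal representations. The sketch below is a generic linear probe under that assumption, not Holmes' specific protocol.

```python
# Illustrative linear probe: predict a linguistic label (e.g., part of speech) from
# frozen hidden states. A standard probing setup, not necessarily Holmes' exact one.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def probe_hidden_states(hidden_states: np.ndarray, labels: np.ndarray) -> float:
    """hidden_states: (n_tokens, hidden_dim) activations from a frozen LM layer.
    labels: (n_tokens,) linguistic annotations. Returns held-out probe accuracy."""
    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return probe.score(X_test, y_test)
```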
- Toward best research practices in AI Psychology [3.8073142980733]
Language models have become an essential part of the burgeoning field of AI Psychology.
I discuss 14 methodological considerations that can help design more robust, generalizable studies evaluating the cognitive abilities of language-based AI systems.
arXiv Detail & Related papers (2023-12-03T04:28:19Z)
- Personality Style Recognition via Machine Learning: Identifying Anaclitic and Introjective Personality Styles from Patients' Speech [6.3042597209752715]
We use natural language processing (NLP) and machine learning tools for classification.
We test this on a dataset of recorded clinical diagnostic interviews (CDI) from a sample of 79 patients diagnosed with major depressive disorder (MDD).
We find that automated classification with language-derived features (i.e., based on LIWC) significantly outperforms questionnaire-based classification models.
arXiv Detail & Related papers (2023-11-07T15:56:19Z)
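A rough sketch of the comparison in the personality-style entry above: classify style labels from LIWC-style word-category proportions versus from questionnaire scores. The two toy word categories and the logistic-regression classifier are placeholders for the LIWC features and models actually used.

```python
# Rough sketch: compare a classifier on LIWC-style word-category proportions with one
# on questionnaire scores. The two categories below are toy stand-ins for LIWC.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

TOY_CATEGORIES = {
    "social": {"we", "friend", "family", "talk"},
    "cognitive": {"think", "because", "know", "consider"},
}


def liwc_like_features(transcript: str) -> list[float]:
    """Proportion of words in each toy category."""
    words = transcript.lower().split()
    total = max(len(words), 1)
    return [sum(w in cat for w in words) / total for cat in TOY_CATEGORIES.values()]


def compare_feature_sets(transcripts, questionnaire_scores, style_labels):
    """Cross-validated accuracy for language-derived vs questionnaire features.
    questionnaire_scores: (n_patients, n_scales) array-like of questionnaire results."""
    language_X = [liwc_like_features(t) for t in transcripts]
    clf = LogisticRegression(max_iter=1000)
    return (
        cross_val_score(clf, language_X, style_labels, cv=5).mean(),
        cross_val_score(clf, questionnaire_scores, style_labels, cv=5).mean(),
    )
```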
- Using Artificial Populations to Study Psychological Phenomena in Neural Models [0.0]
Investigation of cognitive behavior in language models must be conducted in an appropriate population for the results to be meaningful.
We leverage work in uncertainty estimation in a novel approach to efficiently construct experimental populations.
We provide theoretical grounding in the uncertainty estimation literature and motivation from current cognitive work regarding language models.
arXiv Detail & Related papers (2023-08-15T20:47:51Z)
- Words That Stick: Predicting Decision Making and Synonym Engagement Using Cognitive Biases and Computational Linguistics [3.09766013093045]
This research draws upon cognitive psychology and information systems studies to anticipate user engagement and decision-making on digital platforms.
Our methodology synthesizes four cognitive biases (Representativeness, Ease-of-use, Affect, and Distribution) into the READ model.
Through a comprehensive user survey, we assess the model's ability to predict user engagement, discovering that synonyms that accurately represent core ideas, are easy to understand, elicit emotional responses, and are commonly encountered, promote greater user engagement.
arXiv Detail & Related papers (2023-07-26T21:20:03Z)
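One way to read "synthesizes four cognitive biases into the READ model" is as a per-synonym score combining the four features. The sketch below does exactly that with equal weights; the feature definitions and weights are assumptions for illustration, since the paper derives its model from survey data.

```python
# Hypothetical scoring in the spirit of the READ model: combine four bias-inspired
# features of a synonym into one engagement score. Equal weights are an assumption.
from dataclasses import dataclass


@dataclass
class SynonymFeatures:
    representativeness: float  # how well the word captures the core idea (0-1)
    ease_of_use: float         # how easy the word is to understand (0-1)
    affect: float              # strength of emotional response it elicits (0-1)
    distribution: float        # how commonly the word is encountered (0-1)


def read_score(features: SynonymFeatures, weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    values = (
        features.representativeness,
        features.ease_of_use,
        features.affect,
        features.distribution,
    )
    return sum(w * v for w, v in zip(weights, values))
```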
- Perception Point: Identifying Critical Learning Periods in Speech for Bilingual Networks [58.24134321728942]
We compare and identify cognitive aspects in deep neural-network-based visual lip-reading models.
We observe a strong correlation between these theories in cognitive psychology and our unique modeling.
arXiv Detail & Related papers (2021-10-13T05:30:50Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
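The CogAlign summary describes integrating cognitive processing signals (such as eye-tracking or EEG) into NLP models. The sketch below adds an auxiliary head that regresses onto such signals alongside the main task, which is one plausible alignment setup rather than CogAlign's published architecture.

```python
# One plausible way to integrate cognitive signals: a shared text encoder with an
# auxiliary head regressing onto eye-tracking/EEG features, trained jointly with the
# main task. An assumption-laden sketch, not CogAlign's published architecture.
import torch
import torch.nn as nn


class CognitiveSignalAuxModel(nn.Module):
    def __init__(self, vocab_size, hidden_dim, n_labels, n_cognitive_features):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_dim)
        self.encoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.task_head = nn.Linear(hidden_dim, n_labels)                    # main NLP task
        self.cognitive_head = nn.Linear(hidden_dim, n_cognitive_features)   # auxiliary

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        return self.task_head(states), self.cognitive_head(states)


def joint_loss(task_logits, labels, cog_pred, cog_signals, alpha=0.1):
    """Main-task cross-entropy plus alpha-weighted MSE against cognitive signals."""
    task = nn.functional.cross_entropy(task_logits.flatten(0, 1), labels.flatten())
    align = nn.functional.mse_loss(cog_pred, cog_signals)
    return task + alpha * align
```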
- Empowering Language Understanding with Counterfactual Reasoning [141.48592718583245]
We propose a Counterfactual Reasoning Model, which mimics counterfactual thinking by learning from few counterfactual samples.
In particular, we devise a generation module to generate representative counterfactual samples for each factual sample, and a retrospective module to retrospect the model prediction by comparing the counterfactual and factual samples.
arXiv Detail & Related papers (2021-06-06T06:36:52Z)
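The counterfactual-reasoning summary names a generation module and a retrospective module. The sketch below pairs a toy antonym-swap generator with a prediction comparison; both choices are assumptions standing in for the paper's learned modules.

```python
# Loose sketch of the two-module idea: generate counterfactual variants of a factual
# input, then "retrospect" by comparing the model's outputs on factual vs counterfactual
# samples. The swap-based generator and difference-based comparison are assumptions.
import numpy as np


def generate_counterfactuals(tokens, antonyms):
    """Toy generator: create one variant per word that has a known antonym."""
    variants = []
    for i, tok in enumerate(tokens):
        if tok in antonyms:
            variants.append(tokens[:i] + [antonyms[tok]] + tokens[i + 1:])
    return variants


def retrospect(predict, factual_tokens, antonyms):
    """Contrast the factual prediction with the mean counterfactual prediction."""
    factual = predict(factual_tokens)
    variants = generate_counterfactuals(factual_tokens, antonyms)
    if not variants:
        return factual
    counterfactual = np.mean([predict(v) for v in variants], axis=0)
    # Emphasize what changes when the input is minimally perturbed.
    return factual + (factual - counterfactual)
```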
- Knowledge-Grounded Dialogue Generation with Pre-trained Language Models [74.09352261943911]
We study knowledge-grounded dialogue generation with pre-trained language models.
We propose equipping response generation defined by a pre-trained language model with a knowledge selection module.
arXiv Detail & Related papers (2020-10-17T16:49:43Z)
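The knowledge-grounded dialogue summary splits the task into knowledge selection and response generation. The sketch below ranks knowledge snippets by TF-IDF similarity to the dialogue context and passes the winner to any pre-trained generator; the similarity-based selector and the prompt format are illustrative assumptions, not the paper's learned selection module.

```python
# Sketch of the "knowledge selection + generation" split: rank knowledge snippets by
# TF-IDF similarity to the dialogue context, then hand the best one to a generator.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_knowledge(context: str, knowledge_snippets: list[str]) -> str:
    """Pick the snippet most similar to the dialogue context."""
    vec = TfidfVectorizer().fit(knowledge_snippets + [context])
    sims = cosine_similarity(vec.transform([context]), vec.transform(knowledge_snippets))
    return knowledge_snippets[sims.argmax()]


def grounded_response(context: str, knowledge_snippets: list[str], generate_fn) -> str:
    """generate_fn: any pre-trained LM wrapper mapping a prompt string to a response."""
    knowledge = select_knowledge(context, knowledge_snippets)
    prompt = f"knowledge: {knowledge}\ndialogue: {context}\nresponse:"
    return generate_fn(prompt)
```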