Math anxiety and associative knowledge structure are entwined in psychology students but not in Large Language Models like GPT-3.5 and GPT-4o
- URL: http://arxiv.org/abs/2511.01558v1
- Date: Mon, 03 Nov 2025 13:25:11 GMT
- Authors: Luciana Ciringione, Emma Franchino, Simone Reigl, Isaia D'Onofrio, Anna Serbati, Oleksandra Poquet, Florence Gabriel, Massimo Stella
- Abstract summary: This study employs a framework based on behavioural forma mentis networks to explore individual and group differences in the perception and association of concepts related to math and anxiety. Experiments 1, 2, and 3 employ individual-level network features to predict psychometric scores for math anxiety. Experiment 4 focuses on group-level perceptions extracted from the networks of human students, GPT-3.5 and GPT-4o.
- Score: 10.71149623650681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Math anxiety poses significant challenges for university psychology students, affecting their career choices and overall well-being. This study employs a framework based on behavioural forma mentis networks (i.e. cognitive models that map how individuals structure their associative knowledge and emotional perceptions of concepts) to explore individual and group differences in the perception and association of concepts related to math and anxiety. We conducted 4 experiments involving psychology undergraduates from 2 samples (n1 = 70, n2 = 57) compared against GPT-simulated students (GPT-3.5: n3 = 300; GPT-4o: n4 = 300). Experiments 1, 2, and 3 employ individual-level network features to predict psychometric scores for math anxiety and its facets (observational, social and evaluational) from the Math Anxiety Scale. Experiment 4 focuses on group-level perceptions extracted from the networks of human students, GPT-3.5 and GPT-4o. Results indicate that, in students, positive valence ratings and a higher network degree for "anxiety", together with negative ratings for "math", can predict higher total and evaluative math anxiety. In contrast, these models do not work on GPT-based data, because the simulated networks and psychometric scores differ from those of humans. These results were also reconciled with differences in how high and low math-anxiety subgroups of simulated and real students framed STEM concepts semantically and emotionally. High math-anxiety students collectively framed "anxiety" in an emotionally polarising way, a framing absent in the negative perception of low math-anxiety students. "Science" was rated positively, but contrasted against the negative perception of "math". These findings underscore the importance of understanding concept perception and associations in managing students' math anxiety.
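The individual-level prediction idea behind Experiments 1-3 can be sketched in a few lines of Python. Everything below (the association lists, valence ratings, weights and the scoring rule) is invented for illustration only; the paper fits its actual models on free-association data and psychometric scores collected from students.

```python
# Toy sketch of a forma mentis network for one hypothetical student:
# each concept maps to the concepts the student associated with it.
network = {
    "math":    ["anxiety", "numbers", "exam", "failure"],
    "anxiety": ["math", "exam", "stress", "fear", "failure"],
    "science": ["discovery", "curiosity", "numbers"],
}

# Hypothetical valence ratings on a -1 (negative) .. +1 (positive) scale.
valence = {"math": -0.6, "anxiety": 0.2, "science": 0.7}

def degree(net, node):
    """Number of distinct neighbours of `node`, treating edges as undirected."""
    neighbours = set(net.get(node, []))
    neighbours |= {src for src, targets in net.items() if node in targets}
    neighbours.discard(node)
    return len(neighbours)

def anxiety_proxy(net, val):
    """Toy linear score: a higher degree of 'anxiety', more positive valence
    of 'anxiety', and more negative valence of 'math' all raise the score,
    mirroring the direction of the reported effects (weights are arbitrary)."""
    return 0.5 * degree(net, "anxiety") + 1.0 * val["anxiety"] - 1.0 * val["math"]

print(degree(network, "anxiety"))          # 5 distinct neighbours
print(anxiety_proxy(network, valence))     # 0.5*5 + 0.2 + 0.6 = 3.3
```

The sign pattern (positive weight on "anxiety" valence, negative on "math" valence) follows the abstract's reported predictors; the magnitudes are placeholders, not fitted coefficients.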
Related papers
- Cognitive networks reconstruct mindsets about STEM subjects and educational contexts in almost 1000 high-schoolers, University students and LLM-based digital twins [35.18016233072556]
We use cognitive network science to reconstruct group mindsets as behavioural forma mentis networks (BFMNs). Across student groups, science and research are consistently framed positively, while their core quantitative subjects exhibit more negative and anxiety-related auras. Human networks show greater overlap between mathematics and anxiety than GPT-oss.
arXiv Detail & Related papers (2026-02-16T13:49:21Z)
- A network psychometric analysis of maths anxiety factors in Italian psychology students [0.0]
This study translated the 3-factor MAS-UK scale into Italian to produce a new tool, MAS-IT. A sample of 324 Italian undergraduates completed the MAS-IT. CFA results revealed that the original MAS-UK 3-factor model did not fit the Italian data.
arXiv Detail & Related papers (2025-03-03T14:11:16Z)
- Cognitive networks highlight differences and similarities in the STEM mindsets of human and LLM-simulated trainees, experts and academics [0.0]
This study uses behavioural forma mentis networks to investigate the STEM-focused mindset. Human forma mentis networks exhibited significantly higher clustering coefficients compared to GPT-3.5. Human experts, in particular, demonstrated robust clustering coefficients, reflecting better integration of STEM concepts into their cognitive networks.
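The clustering coefficient compared above measures how interconnected a concept's neighbours are in the network. A minimal pure-Python version, applied to an invented toy graph, looks like this:

```python
def clustering_coefficient(adj, node):
    """Fraction of pairs of `node`'s neighbours that are themselves linked.
    `adj` maps each node to a set of its neighbours (undirected graph)."""
    neighbours = list(adj[node])
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if neighbours[j] in adj[neighbours[i]]
    )
    return 2 * links / (k * (k - 1))

# Hypothetical undirected graph: a triangle (math, anxiety, exam) plus a
# pendant node (science) attached to math.
adj = {
    "math":    {"anxiety", "exam", "science"},
    "anxiety": {"math", "exam"},
    "exam":    {"math", "anxiety"},
    "science": {"math"},
}

# Only 1 of the 3 possible pairs among math's neighbours is linked.
print(clustering_coefficient(adj, "math"))
```

A denser, more integrated network (like the human expert networks described above) would show values closer to 1 around its core concepts.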
arXiv Detail & Related papers (2025-02-26T20:02:51Z)
- Measuring Psychological Depth in Language Models [50.48914935872879]
We introduce the Psychological Depth Scale (PDS), a novel framework rooted in literary theory that measures an LLM's ability to produce authentic and narratively complex stories.
We empirically validate our framework by showing that humans can consistently evaluate stories based on PDS (0.72 Krippendorff's alpha).
Surprisingly, GPT-4 stories either surpassed or were statistically indistinguishable from highly-rated human-written stories sourced from Reddit.
arXiv Detail & Related papers (2024-06-18T14:51:54Z)
- PsychoGAT: A Novel Psychological Measurement Paradigm through Interactive Fiction Games with LLM Agents [68.50571379012621]
Psychological measurement is essential for mental health, self-understanding, and personal development.
PsychoGAT (Psychological Game AgenTs) achieves statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity.
arXiv Detail & Related papers (2024-02-19T18:00:30Z)
- Cognitive network science reveals bias in GPT-3, ChatGPT, and GPT-4 mirroring math anxiety in high-school students [0.3131740922192114]
We investigate perceptions of math and STEM fields provided by cutting-edge language models, namely GPT-3, ChatGPT, and GPT-4.
Our findings indicate that LLMs have an overall negative perception of math and STEM fields, with math being perceived most negatively.
We observe that newer versions (i.e. GPT-4) produce richer, more complex perceptions as well as less negative perceptions compared to older versions and N=159 high-school students.
arXiv Detail & Related papers (2023-05-22T15:06:51Z)
- Inducing anxiety in large language models can induce bias [47.85323153767388]
We focus on twelve established large language models (LLMs) and subject them to a questionnaire commonly used in psychiatry.
Our results show that six of the latest LLMs respond robustly to the anxiety questionnaire, producing comparable anxiety scores to humans.
Anxiety-induction not only influences LLMs' scores on an anxiety questionnaire but also influences their behavior in a previously-established benchmark measuring biases such as racism and ageism.
arXiv Detail & Related papers (2023-04-21T16:29:43Z)
- Evaluating Psychological Safety of Large Language Models [72.88260608425949]
We designed unbiased prompts to evaluate the psychological safety of large language models (LLMs).
We tested five different LLMs using two personality tests: the Short Dark Triad (SD-3) and the Big Five Inventory (BFI).
Despite being instruction fine-tuned with safety metrics to reduce toxicity, InstructGPT, GPT-3.5, and GPT-4 still showed dark personality patterns.
Fine-tuning Llama-2-chat-7B with responses from BFI using direct preference optimization could effectively reduce the psychological toxicity of the model.
arXiv Detail & Related papers (2022-12-20T18:45:07Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- DASentimental: Detecting depression, anxiety and stress in texts via emotional recall, cognitive networks and machine learning [0.0]
This project proposes a semi-supervised machine learning model (DASentimental) to extract depression, anxiety and stress from written text.
We train the model to spot how sequences of emotion words recalled by $N=200$ individuals correlate with responses to the Depression Anxiety Stress Scale (DASS-21).
We find that semantic distances between recalled emotions and the dyad "sad-happy" are crucial features for estimating depression levels but are less important for anxiety and stress.
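The "semantic distance to the sad-happy dyad" feature described above can be illustrated as an unweighted shortest-path computation on an emotion network. The graph and word choices below are invented for the sketch; DASentimental derives its networks from empirical free-recall data.

```python
from collections import deque

def shortest_path_length(adj, start, goal):
    """Breadth-first search on an adjacency dict; returns the hop count
    from `start` to `goal`, or None if the goal is unreachable."""
    if start == goal:
        return 0
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        for nxt in adj.get(node, ()):
            if nxt == goal:
                return dist + 1
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return None

# Hypothetical undirected emotion network (every edge listed both ways).
adj = {
    "gloomy":   {"sad", "tired"},
    "sad":      {"gloomy", "upset"},
    "upset":    {"sad", "angry", "happy"},
    "angry":    {"upset"},
    "tired":    {"gloomy"},
    "cheerful": {"happy"},
    "happy":    {"cheerful", "upset"},
}

# Distance of each recalled emotion word to both poles of the dyad.
for word in ("gloomy", "angry"):
    print(word,
          shortest_path_length(adj, word, "sad"),
          shortest_path_length(adj, word, "happy"))
```

A recalled word that sits much closer to "sad" than to "happy" (like "gloomy" here, at distances 1 vs 3) would contribute to a higher estimated depression level under the feature described above.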
arXiv Detail & Related papers (2021-10-26T13:58:46Z)
- Network psychometrics and cognitive network science open new ways for detecting, understanding and tackling the complexity of math anxiety: A review [0.0]
Math anxiety is a clinical pathology impairing cognitive processing in math-related contexts.
It affects roughly 20% of students in 63 out of 64 worldwide educational systems but correlates weakly with academic performance.
It poses a concrete threat to students' well-being, computational literacy and career prospects in science.
arXiv Detail & Related papers (2021-08-31T12:43:43Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.