How trial-to-trial learning shapes mappings in the mental lexicon:
Modelling Lexical Decision with Linear Discriminative Learning
- URL: http://arxiv.org/abs/2207.00430v3
- Date: Mon, 4 Sep 2023 11:45:12 GMT
- Title: How trial-to-trial learning shapes mappings in the mental lexicon:
Modelling Lexical Decision with Linear Discriminative Learning
- Authors: Maria Heitmeier, Yu-Ying Chuang and R. Harald Baayen
- Abstract summary: This study investigates whether trial-to-trial learning can be detected in an unprimed lexical decision experiment.
We used the Discriminative Lexicon Model (DLM), a model of the mental lexicon with meaning representations from distributional semantics.
Our results support the possibility that our lexical knowledge is subject to continuous changes.
- Score: 0.4450536872346657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trial-to-trial effects have been found in a number of studies, indicating
that processing a stimulus influences responses in subsequent trials. A special
case is priming effects, which have been modelled successfully with
error-driven learning (Marsolek, 2008), implying that participants are
continuously learning during experiments. This study investigates whether
trial-to-trial learning can be detected in an unprimed lexical decision
experiment. We used the Discriminative Lexicon Model (DLM; Baayen et al.,
2019), a model of the mental lexicon with meaning representations from
distributional semantics, which models error-driven incremental learning with
the Widrow-Hoff rule. We used data from the British Lexicon Project (BLP;
Keuleers et al., 2012) and simulated the lexical decision experiment with the
DLM on a trial-by-trial basis for each subject individually. Then, reaction
times were predicted with Generalised Additive Models (GAMs), using measures
derived from the DLM simulations as predictors. We extracted measures from two
simulations per subject (one with learning updates between trials and one
without), and used them as input to two GAMs. Learning-based models showed
better model fit than the non-learning ones for the majority of subjects. Our
measures also provide insights into lexical processing and individual
differences. This demonstrates the potential of the DLM to model behavioural
data and leads to the conclusion that trial-to-trial learning can indeed be
detected in unprimed lexical decision. Our results support the possibility that
our lexical knowledge is subject to continuous changes.
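
For readers unfamiliar with the Widrow-Hoff rule mentioned above, the sketch below illustrates error-driven, trial-by-trial updating of a linear form-to-meaning mapping, in the spirit of the DLM's incremental learning. It is a minimal Python illustration, not the authors' implementation: the cue and semantic dimensions, the random toy vectors, the learning rate, and the widrow_hoff_update helper are all assumptions made for this example.

    import numpy as np

    def widrow_hoff_update(W, cue, target, rate=0.01):
        # One Widrow-Hoff (delta-rule) update of a linear form-to-meaning mapping.
        # W: (n_cues, n_dims) weights; cue: (n_cues,) form vector; target: (n_dims,) semantic vector.
        prediction = cue @ W                  # semantic vector predicted from the form cues
        error = target - prediction           # prediction error on this trial
        W = W + rate * np.outer(cue, error)   # error-driven correction of the mapping
        return W, error

    # Toy "experiment": 5 hypothetical form cues, 3 semantic dimensions, 4 trials.
    rng = np.random.default_rng(0)
    W = np.zeros((5, 3))
    trials = [(rng.integers(0, 2, size=5).astype(float), rng.normal(size=3))
              for _ in range(4)]

    for cue, target in trials:
        W, error = widrow_hoff_update(W, cue, target)
        # The size of the prediction error is one trial-level quantity that can be
        # tracked across trials; the paper derives its predictors from DLM simulations.
        print(round(float(np.linalg.norm(error)), 3))

Running the same loop with the weight update disabled corresponds conceptually to the non-learning simulation described above, against which the learning-based GAMs are compared.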
Related papers
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z) - CM-DQN: A Value-Based Deep Reinforcement Learning Model to Simulate Confirmation Bias [0.0]
We propose a new algorithm in Deep Reinforcement Learning, CM-DQN, to simulate the human decision-making process.
We test in the Lunar Lander environment with confirmatory bias, disconfirmatory bias, and no bias to observe the learning effects.
arXiv Detail & Related papers (2024-07-10T08:16:13Z) - Large Language Models are Biased Reinforcement Learners [0.0]
We show that large language models (LLMs) exhibit behavioral signatures of a relative value bias.
Computational cognitive modeling reveals that LLM behavior is well-described by a simple RL algorithm.
arXiv Detail & Related papers (2024-05-19T01:43:52Z) - CausalGym: Benchmarking causal interpretability methods on linguistic tasks [52.61917615039112]
We use CausalGym to benchmark the ability of interpretability methods to causally affect model behaviour.
We study the Pythia models (14M-6.9B) and assess the causal efficacy of a wide range of interpretability methods.
We find that DAS outperforms the other methods, and so we use it to study the learning trajectory of two difficult linguistic phenomena.
arXiv Detail & Related papers (2024-02-19T21:35:56Z) - Uncertainty Quantification for In-Context Learning of Large Language Models [52.891205009620364]
In-context learning has emerged as a groundbreaking ability of Large Language Models (LLMs).
We propose a novel formulation and corresponding estimation method to quantify both types of uncertainties.
The proposed method offers an unsupervised way to understand the prediction of in-context learning in a plug-and-play fashion.
arXiv Detail & Related papers (2024-02-15T18:46:24Z) - Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z) - Sources of Hallucination by Large Language Models on Inference Tasks [16.644096408742325]
Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI).
We present a series of behavioral studies on several LLM families which probe their behavior using controlled experiments.
arXiv Detail & Related papers (2023-05-23T22:24:44Z) - Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - Interpretable Machine Learning Classifiers for Brain Tumour Survival Prediction [0.45880283710344055]
We use a novel brain tumour dataset to compare two interpretable rule list models against popular machine learning approaches for brain tumour survival prediction.
We demonstrate that rule list algorithms produced simple decision lists that align with clinical expertise.
arXiv Detail & Related papers (2021-06-17T12:17:10Z) - Empowering Language Understanding with Counterfactual Reasoning [141.48592718583245]
We propose a Counterfactual Reasoning Model, which mimics the counterfactual thinking by learning from few counterfactual samples.
In particular, we devise a generation module to generate representative counterfactual samples for each factual sample, and a retrospective module to retrospect the model prediction by comparing the counterfactual and factual samples.
arXiv Detail & Related papers (2021-06-06T06:36:52Z) - A framework for predicting, interpreting, and improving Learning Outcomes [0.0]
We develop an Embibe Score Quotient model (ESQ) to predict test scores based on observed academic, behavioral and test-taking features of a student.
ESQ can be used to predict the future scoring potential of a student as well as offer personalized learning nudges.
arXiv Detail & Related papers (2020-10-06T11:22:27Z)