A Framework for Learning Assessment through Multimodal Analysis of
Reading Behaviour and Language Comprehension
- URL: http://arxiv.org/abs/2110.11938v1
- Date: Fri, 22 Oct 2021 17:48:03 GMT
- Title: A Framework for Learning Assessment through Multimodal Analysis of
Reading Behaviour and Language Comprehension
- Authors: Santosh Kumar Barnwal
- Abstract summary: This dissertation shows how different skills could be measured and scored automatically.
We also demonstrate, using example experiments on multiple forms of learners' responses, how frequent reading practice could impact the variables of multimodal skills.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reading comprehension, defined as gaining an understanding of
written text through a process of translating graphemes into meaning, is an
important academic skill. The other language learning skills (writing, speaking,
and listening) are all connected to reading comprehension. Researchers have
proposed several measures to automate the assessment of comprehension skills
for second language (L2) learners, especially English as a Second Language (ESL)
and English as a Foreign Language (EFL) learners. However, current methods
measure particular skills without analysing the impact of reading frequency on
comprehension skills. In this dissertation, we show how different skills could
be measured and scored automatically. We also demonstrate, using example
experiments on multiple forms of learners' responses, how frequent reading
practice could impact the variables of multimodal skills (reading pattern,
writing, and oral fluency).
This thesis comprises five studies. The first and second studies are based
on eye-tracking data collected from EFL readers in repeated reading (RR)
sessions. The third and fourth studies evaluate free-text summaries written
by EFL readers in repeated reading sessions. The fifth and final study,
described in the sixth chapter of the thesis, evaluates recorded oral
summaries produced by EFL readers in repeated reading sessions.
In a nutshell, through this dissertation we show that learners' multimodal
skills can be assessed both to measure their comprehension skills and to
measure the effect of repeated readings on these skills over time, by finding
significant features and by applying machine learning techniques in
combination with statistical models such as linear mixed-effects regression (LMER).
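To illustrate the kind of analysis described above, an LMER model would relate a multimodal measure (for example, mean fixation duration from eye tracking) to the repeated-reading session number, with per-learner random effects. The sketch below is a minimal stand-in, not the author's code, and the data are hypothetical: it approximates the fixed effect of session number by averaging per-learner least-squares slopes. A full analysis would use a dedicated package such as lme4 in R or MixedLM in Python's statsmodels.

```python
# Minimal stand-in for an LMER analysis of repeated reading (RR):
# estimate how a reading measure changes across RR sessions while
# letting each learner have their own trend. All data are hypothetical.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical mean fixation durations (ms) for three EFL learners
# across four repeated-reading sessions of the same text.
sessions = [1, 2, 3, 4]
fixations = {
    "learner_A": [310, 280, 265, 250],
    "learner_B": [350, 330, 310, 300],
    "learner_C": [290, 275, 260, 255],
}

# Per-learner slopes (the analogue of random slopes); their mean
# approximates the fixed effect of session number in a mixed model.
per_learner = {k: slope(sessions, v) for k, v in fixations.items()}
session_effect = sum(per_learner.values()) / len(per_learner)

print(f"Per-learner slopes (ms/session): {per_learner}")
print(f"Average session effect: {session_effect:.1f} ms/session")
```

A negative session effect here would indicate that fixation durations shorten with repeated readings, consistent with the automaticity gains that RR practice is expected to produce.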
Related papers
- Decomposed Prompting: Unveiling Multilingual Linguistic Structure
Knowledge in English-Centric Large Language Models [12.700783525558721]
English-centric Large Language Models (LLMs) like GPT-3 and LLaMA display a remarkable ability to perform multilingual tasks.
This paper introduces the decomposed prompting approach to probe the linguistic structure understanding of these LLMs in sequence labeling tasks.
arXiv Detail & Related papers (2024-02-28T15:15:39Z)
- Subspace Chronicles: How Linguistic Information Emerges, Shifts and
Interacts during Language Model Training [56.74440457571821]
We analyze tasks covering syntax, semantics and reasoning, across 2M pre-training steps and five seeds.
We identify critical learning phases across tasks and time, during which subspaces emerge, share information, and later disentangle to specialize.
Our findings have implications for model interpretability, multi-task learning, and learning from limited data.
arXiv Detail & Related papers (2023-10-25T09:09:55Z)
- Gaze-Driven Sentence Simplification for Language Learners: Enhancing
Comprehension and Readability [11.50011780498048]
This paper presents a novel gaze-driven sentence simplification system designed to enhance reading comprehension.
Our system incorporates machine learning models tailored to individual learners, combining eye gaze features and linguistic features to assess sentence comprehension.
arXiv Detail & Related papers (2023-09-30T12:18:31Z)
- Spoken Language Intelligence of Large Language Models for Language
Learning [3.5924382852350902]
We focus on evaluating the efficacy of large language models (LLMs) in the realm of education.
We introduce a new multiple-choice question dataset to evaluate the effectiveness of LLMs in the aforementioned scenarios.
We also investigate the influence of various prompting techniques such as zero- and few-shot methods.
We find that models of different sizes have good understanding of concepts in phonetics, phonology, and second language acquisition, but show limitations in reasoning for real-world problems.
arXiv Detail & Related papers (2023-08-28T12:47:41Z)
- Context-faithful Prompting for Large Language Models [51.194410884263135]
Large language models (LLMs) encode parametric knowledge about world facts.
Their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks.
We assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention.
arXiv Detail & Related papers (2023-03-20T17:54:58Z)
- Unravelling Interlanguage Facts via Explainable Machine Learning [10.71581852108984]
We focus on the internals of an NLI classifier trained by an explainable machine learning algorithm.
We use this perspective in order to tackle both NLI and a companion task, guessing whether a text has been written by a native or a non-native speaker.
We investigate which kind of linguistic traits are most effective for solving our two tasks, namely, are most indicative of a speaker's L1.
arXiv Detail & Related papers (2022-08-02T14:05:15Z)
- Exposing Cross-Lingual Lexical Knowledge from Multilingual Sentence
Encoders [85.80950708769923]
We probe multilingual language models for the amount of cross-lingual lexical knowledge stored in their parameters, and compare them against the original multilingual LMs.
We also devise a novel method to expose this knowledge by additionally fine-tuning multilingual models.
We report substantial gains on standard benchmarks.
arXiv Detail & Related papers (2022-04-30T13:23:16Z)
- Syllabic Quantity Patterns as Rhythmic Features for Latin Authorship
Attribution [74.27826764855911]
We employ syllabic quantity as a base for deriving rhythmic features for the task of computational authorship attribution of Latin prose texts.
Our experiments, carried out on three different datasets, using two different machine learning methods, show that rhythmic features based on syllabic quantity are beneficial in discriminating among Latin prose authors.
arXiv Detail & Related papers (2021-10-27T06:25:31Z)
- Probing Pretrained Language Models for Lexical Semantics [76.73599166020307]
We present a systematic empirical analysis across six typologically diverse languages and five different lexical tasks.
Our results indicate patterns and best practices that hold universally, but also point to prominent variations across languages and tasks.
arXiv Detail & Related papers (2020-10-12T14:24:01Z)
- Analyzing Effect of Repeated Reading on Oral Fluency and Narrative
Production for Computer-Assisted Language Learning [2.28438857884398]
Repeated reading (RR) helps learners gain confidence, speed and process words automatically.
There are no open audio datasets available on oral responses of learners based on RR practices.
We propose a method to assess oral fluency and narrative production for learners of English using acoustic, prosodic, lexical and syntactical characteristics.
arXiv Detail & Related papers (2020-06-25T11:51:08Z)
- ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine
Reading Comprehension [53.037401638264235]
We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets.
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning.
arXiv Detail & Related papers (2019-12-29T07:27:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.