Analyzing Effect of Repeated Reading on Oral Fluency and Narrative
Production for Computer-Assisted Language Learning
- URL: http://arxiv.org/abs/2006.14320v1
- Date: Thu, 25 Jun 2020 11:51:08 GMT
- Title: Analyzing Effect of Repeated Reading on Oral Fluency and Narrative
Production for Computer-Assisted Language Learning
- Authors: Santosh Kumar Barnwal, Uma Shanker Tiwary
- Abstract summary: Repeated reading (RR) helps learners gain confidence and speed and process words automatically.
There are no open audio datasets available on oral responses of learners based on RR practices.
We propose a method to assess oral fluency and narrative production for learners of English using acoustic, prosodic, lexical and syntactical characteristics.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Repeated reading (RR) helps learners who have little to no experience
with reading fluently to gain confidence and speed and to process words
automatically. The benefits of repeated reading include helping all learners
with fact recall, aiding identification of main ideas and vocabulary,
increasing comprehension, producing faster reading and more accurate word
recognition, and assisting struggling learners as they transition from
word-by-word reading to more meaningful phrasing. Thus, RR ultimately improves
learners' oral fluency and narrative production. However, there
are no open audio datasets available on oral responses of learners based on
their RR practices. Therefore, in this paper, we present our dataset, discuss
its properties, and propose a method to assess oral fluency and narrative
production for learners of English using acoustic, prosodic, lexical and
syntactical characteristics. The results show that a CALL system can be
developed for assessing the improvements in learners' oral fluency and
narrative production.
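The paper's code is not part of this listing, so the following is a minimal sketch, assuming Python with librosa and NumPy, of how the acoustic-prosodic side of such an assessment could be computed (pitch statistics, pause counts, phonation ratio). The thresholds and feature choices are illustrative, and the lexical and syntactic characteristics named in the abstract would additionally require transcripts.

```python
# Hedged sketch (not the authors' code) of acoustic-prosodic fluency cues.
import numpy as np
import librosa

def fluency_features(wav_path, pause_db=-40.0):
    y, sr = librosa.load(wav_path, sr=16000)
    # Prosody: fundamental-frequency contour (pitch level and range).
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    # Energy contour, used here only to locate silent pauses.
    rms = librosa.feature.rms(y=y)[0]
    db = librosa.amplitude_to_db(rms, ref=np.max)
    silent = db < pause_db
    # Crude pause count: transitions from speech into silence.
    n_pauses = int(np.sum(silent[1:] & ~silent[:-1]))
    return {
        "duration_s": len(y) / sr,
        "f0_mean": float(np.mean(f0)) if f0.size else 0.0,
        "f0_range": float(np.ptp(f0)) if f0.size else 0.0,
        "pause_count": n_pauses,
        "phonation_ratio": float(np.mean(~silent)),
    }
```

Features like these, tracked across repeated readings of the same passage, are the kind of signal a CALL system could use to chart a learner's fluency gains.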
Related papers
- Deep Learning for Assessment of Oral Reading Fluency [5.707725771108279]
This work investigates end-to-end modeling on a training dataset of children's audio recordings of story texts labeled by human experts.
We report the performance of a number of system variations on the relevant measures, and probe the learned embeddings for lexical and acoustic-prosodic features known to be important to the perception of reading fluency.
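A probing experiment of the kind mentioned above can be approximated with a linear probe on frozen embeddings; the sketch below, with synthetic data standing in for the paper's recordings, is only illustrative.

```python
# Fit a simple linear probe on frozen utterance embeddings to test whether
# they encode a known fluency correlate (e.g., speech rate). Data and
# variable names are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))   # frozen embeddings, one row per utterance
y = rng.normal(size=200)          # target attribute, e.g., words per second

probe = Ridge(alpha=1.0)
# A high cross-validated R^2 would suggest the attribute is linearly decodable.
print(cross_val_score(probe, X, y, cv=5, scoring="r2").mean())
```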
arXiv Detail & Related papers (2024-05-29T18:09:35Z)
- In-Memory Learning: A Declarative Learning Framework for Large Language Models [56.62616975119192]
We propose a novel learning framework that allows agents to align with their environment without relying on human-labeled data.
This entire process transpires within the memory components and is implemented through natural language.
We demonstrate the effectiveness of our framework and provide insights into this problem.
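Reading the summary literally, a minimal sketch of such a loop might look as follows; `call_llm` is a hypothetical stand-in for any text-generation backend, and nothing here is the paper's actual implementation.

```python
# The agent's "learning" is an evolving natural-language note store that is
# fed back into each prompt, rather than a weight update.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your text-generation backend")

memory_notes: list[str] = []   # declarative knowledge, stored as text

def act(task: str) -> str:
    context = "\n".join(memory_notes)
    return call_llm(f"Notes so far:\n{context}\n\nTask: {task}\nAnswer:")

def learn(task: str, answer: str, feedback: str) -> None:
    # Summarize the experience into a new note to reuse next time.
    note = call_llm(
        f"Task: {task}\nAnswer: {answer}\nFeedback: {feedback}\n"
        "Write one short lesson to remember next time:"
    )
    memory_notes.append(note.strip())
```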
arXiv Detail & Related papers (2024-03-05T08:25:11Z)
- Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability [11.50011780498048]
This paper presents a novel gaze-driven sentence simplification system designed to enhance reading comprehension.
Our system incorporates machine learning models tailored to individual learners, combining eye gaze features and linguistic features to assess sentence comprehension.
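A hedged sketch of that feature combination, using synthetic gaze and linguistic features with scikit-learn's logistic regression; the feature names and labels are illustrative.

```python
# Combine per-sentence gaze features (fixation count, total fixation
# duration, regressions) with linguistic features (length, word frequency)
# and train a classifier to flag sentences a learner likely did not
# comprehend.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
gaze = rng.normal(size=(300, 3))        # fixations, duration, regressions
linguistic = rng.normal(size=(300, 2))  # length, mean word frequency
X = np.hstack([gaze, linguistic])
y = rng.integers(0, 2, size=300)        # 1 = comprehension failure (label)

clf = LogisticRegression().fit(X, y)
# Sentences predicted as failures would be candidates for simplification.
print(clf.predict_proba(X[:5])[:, 1])
```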
arXiv Detail & Related papers (2023-09-30T12:18:31Z)
- Storyfier: Exploring Vocabulary Learning Support with Text Generation Models [52.58844741797822]
We develop Storyfier to provide a coherent context for any target words of learners' interests.
Learners generally favor the generated stories for connecting target words and the writing assistance for easing their learning workload.
However, in read-cloze-write learning sessions, participants using Storyfier perform worse in recalling and using target words than those learning with a baseline tool without our AI features.
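The core prompting idea can be sketched as below; `generate` is a hypothetical stand-in, not Storyfier's API, and the prompt wording is illustrative.

```python
# Ask a text-generation model for a short story that uses every target
# word in context, giving the learner a coherent setting for the words.
def generate(prompt: str) -> str:
    raise NotImplementedError("plug in your text-generation backend")

def story_for(target_words: list[str], level: str = "intermediate") -> str:
    words = ", ".join(target_words)
    return generate(
        f"Write a short, coherent story for an {level} English learner. "
        f"Use each of these words exactly once and mark them in UPPERCASE: "
        f"{words}."
    )
```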
arXiv Detail & Related papers (2023-08-07T18:25:00Z)
- Context-faithful Prompting for Large Language Models [51.194410884263135]
Large language models (LLMs) encode parametric knowledge about world facts.
Their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks.
We assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention.
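One way to render those two aspects in prompt form, with phrasings that are illustrative rather than copied from the paper: attribute the context to a narrator to defuse knowledge conflict, and offer an explicit abstention option.

```python
def opinion_prompt(context: str, question: str) -> str:
    # Attributing the context to a narrator nudges the model to answer
    # from the passage rather than from its parametric knowledge.
    return (
        f'Bob said, "{context}"\n'
        f"Q: {question} (according to Bob's statement)\nA:"
    )

def abstention_prompt(context: str, question: str) -> str:
    # Permit the model to decline when the context does not contain
    # the answer, instead of guessing from memorized facts.
    return (
        f"Context: {context}\nQ: {question}\n"
        'If the context does not answer the question, reply "unanswerable".\nA:'
    )
```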
arXiv Detail & Related papers (2023-03-20T17:54:58Z)
- Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning [113.58691755215663]
We develop RetroPrompt to help a model strike a balance between generalization and memorization.
In contrast with vanilla prompt learning, RetroPrompt constructs an open-book knowledge-store from training instances.
Extensive experiments demonstrate that RetroPrompt can obtain better performance in both few-shot and zero-shot settings.
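The open-book idea can be sketched as nearest-neighbor retrieval over training-instance embeddings; this illustrates the summary above, not RetroPrompt's implementation.

```python
# Retrieve the nearest training instances for a query embedding so that
# instance-level knowledge is looked up at inference time rather than
# memorized in the prompt parameters.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
train_emb = rng.normal(size=(500, 64))     # embeddings of training instances
train_text = [f"example {i}" for i in range(500)]

index = NearestNeighbors(n_neighbors=4, metric="cosine").fit(train_emb)

def retrieve(query_emb: np.ndarray) -> list[str]:
    _, idx = index.kneighbors(query_emb.reshape(1, -1))
    return [train_text[i] for i in idx[0]]

demos = retrieve(rng.normal(size=64))  # prepend these to the prompt
```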
arXiv Detail & Related papers (2022-05-29T16:07:30Z)
- Learning to Mediate Disparities Towards Pragmatic Communication [9.321336642983875]
We propose Pragmatic Rational Speaker (PRS) as a framework for building AI agents with similar abilities in language communication.
The PRS attempts to learn the speaker-listener disparity and adjust the speech accordingly, by adding a light-weighted disparity adjustment layer into working memory.
By fixing the long-term memory, the PRS only needs to update its working memory to learn and adapt to different types of listeners.
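A minimal PyTorch-style sketch of that split: freeze a pretrained base model (the "long-term memory") and train only a lightweight adjustment layer (the "working memory"). Module names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

class PragmaticSpeaker(nn.Module):
    def __init__(self, base_speaker: nn.Module, hidden: int):
        super().__init__()
        self.base = base_speaker                 # long-term memory
        for p in self.base.parameters():
            p.requires_grad = False              # kept fixed
        self.adjust = nn.Linear(hidden, hidden)  # working memory

    def forward(self, x):
        h = self.base(x)
        return self.adjust(h)  # only this layer adapts per listener type
```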
arXiv Detail & Related papers (2022-03-25T14:46:43Z)
- Counterfactual Memorization in Neural Language Models [91.8747020391287]
Modern neural language models that are widely used in various NLP tasks risk memorizing sensitive information from their training data.
An open question in previous studies of language model memorization is how to filter out "common" memorization.
We formulate a notion of counterfactual memorization which characterizes how a model's predictions change if a particular document is omitted during training.
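That notion has a direct empirical estimator: average a document's per-model score over models whose random training subset contained it, minus the average over models whose subset did not. A sketch with synthetic scores (real use requires training many models on random data subsets):

```python
import numpy as np

rng = np.random.default_rng(3)
n_models, n_docs = 40, 100
included = rng.random((n_models, n_docs)) < 0.5   # subset membership
scores = rng.random((n_models, n_docs))           # per-doc performance

def counterfactual_memorization(doc: int) -> float:
    # Performance gap between models that saw the document and models
    # that did not; a large gap indicates the document was memorized.
    in_mask = included[:, doc]
    return scores[in_mask, doc].mean() - scores[~in_mask, doc].mean()

print(counterfactual_memorization(0))
```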
arXiv Detail & Related papers (2021-12-24T04:20:57Z)
- A Framework for Learning Assessment through Multimodal Analysis of Reading Behaviour and Language Comprehension [0.0]
This dissertation shows how different skills could be measured and scored automatically.
We also demonstrate, using example experiments on multiple forms of learners' responses, how frequent reading practice can affect the variables of multimodal skills.
arXiv Detail & Related papers (2021-10-22T17:48:03Z)
- Deep Learning for Prominence Detection in Children's Read Speech [13.041607703862724]
We consider a labeled dataset of children's reading recordings for the speaker-independent detection of prominent words.
A previous well-tuned random forest ensemble predictor is replaced by an RNN sequence to exploit potential context dependency.
Deep learning is applied to obtain word-level features from low-level acoustic contours of fundamental frequency, intensity and spectral shape.
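A sketch of such a sequence model, assuming PyTorch: a bidirectional GRU over frame-level contours, with frames pooled into word spans. Sizes and spans are illustrative.

```python
import torch
import torch.nn as nn

class ProminenceRNN(nn.Module):
    def __init__(self, n_contours: int = 3, hidden: int = 64):
        super().__init__()
        # Input frames carry f0, intensity, and a spectral-shape contour.
        self.rnn = nn.GRU(n_contours, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frames, word_spans):
        # frames: (1, n_frames, n_contours); word_spans: [(start, end), ...]
        h, _ = self.rnn(frames)
        # Pool frames within each word's span into a word-level feature.
        word_feats = torch.stack(
            [h[0, s:e].mean(dim=0) for s, e in word_spans])
        return self.head(word_feats).squeeze(-1)  # one score per word

model = ProminenceRNN()
scores = model(torch.randn(1, 200, 3), [(0, 50), (50, 120), (120, 200)])
```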
arXiv Detail & Related papers (2021-04-12T14:15:08Z)
- UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data [54.733889961024445]
We propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data.
We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus.
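The unified objective can be caricatured as a weighted sum of a supervised phonetic (CTC) loss on labeled speech and a self-supervised contrastive loss; the sketch below is an assumption-laden simplification, not UniSpeech's actual code.

```python
import torch
import torch.nn.functional as F

def unified_loss(log_probs, targets, in_lens, tgt_lens,
                 ctx, pos, negs, alpha=0.5, temp=0.1):
    # Supervised branch: CTC over phonetic targets (labeled data only).
    # log_probs: (T, B, C) log-softmax outputs of the encoder.
    ctc = F.ctc_loss(log_probs, targets, in_lens, tgt_lens)
    # Self-supervised branch: contrast each context vector (B, D) against
    # its true quantized target pos (B, D) vs. distractors negs (B, K, D).
    logits = torch.cat([
        F.cosine_similarity(ctx, pos, dim=-1).unsqueeze(-1),
        F.cosine_similarity(ctx.unsqueeze(1), negs, dim=-1),
    ], dim=-1) / temp
    contrastive = F.cross_entropy(
        logits, torch.zeros(len(logits), dtype=torch.long))
    return alpha * ctc + (1 - alpha) * contrastive
```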
arXiv Detail & Related papers (2021-01-19T12:53:43Z)