An Interactive Foreign Language Trainer Using Assessment and Feedback Modalities
- URL: http://arxiv.org/abs/2011.11525v1
- Date: Mon, 23 Nov 2020 16:35:59 GMT
- Title: An Interactive Foreign Language Trainer Using Assessment and Feedback Modalities
- Authors: Rosalyn P. Reyes, Evelyn C. Samson, Julius G. Garcia
- Abstract summary: This study is designed to help students learn one or more of the four foreign languages most commonly used in the field of Information Technology.
The program is intended to teach students quickly at basic, intermediate, and advanced levels.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: English has long served as the universal language; most, if not all,
countries speak English or at least attempt to use it in everyday communication
for the purpose of globalization. This study is designed to help students learn
one or more of the four foreign languages most commonly used in the field of
Information Technology, namely Korean, Mandarin Chinese, Japanese, and Spanish.
Composed of a set of words, phrases, and sentences, the program is intended to
teach students quickly at basic, intermediate, and advanced levels. This study
used the Agile model in system development. Functionality, reliability,
usability, efficiency, and portability were also considered in determining the
system's level of acceptability in terms of ISO/IEC 25010:2011. This interactive
foreign language trainer is built to associate fun with learning, to remedy the
lack of perseverance some learners show in studying a new language, and to make
learning the users' favorite playtime activity. The study allows the user to
interact with the program, which provides support for their learning. Moreover,
this study reveals that integrating feedback modalities into the training and
assessment modules of the software strengthens and enhances memory in learning
the language.
Related papers
- Teaching Embodied Reinforcement Learning Agents: Informativeness and Diversity of Language Use [16.425032085699698]
It is desirable for embodied agents to have the ability to leverage human language to gain explicit or implicit knowledge for learning tasks.
However, it is not yet clear how to incorporate rich language use to facilitate task learning.
This paper studies different types of language inputs in facilitating reinforcement learning.
arXiv Detail & Related papers (2024-10-31T17:59:52Z)
- A Transformer-Based Multi-Stream Approach for Isolated Iranian Sign Language Recognition [0.0]
This research aims to recognize Iranian Sign Language words with the help of the latest deep learning tools such as transformers.
The dataset used includes 101 Iranian Sign Language words frequently used in academic environments such as universities.
arXiv Detail & Related papers (2024-06-27T06:54:25Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [66.79173000135717]
We apply this work to teaching two Indian languages, Kannada and Marathi, which do not have well-developed resources for second language learning.
We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary).
We enlist language educators from schools in North America to perform a manual evaluation; they find the materials have potential for use in their lesson preparation and learner evaluation.
arXiv Detail & Related papers (2023-10-27T18:17:29Z)
- Prototype of a robotic system to assist the learning process of English language with text-generation through DNN [0.0]
We present a working prototype of a humanoid robotic system to assist English language self-learners.
The learners interact with the system using a Graphic User Interface that generates text according to the English level of the user.
arXiv Detail & Related papers (2023-09-20T08:39:51Z)
- Large Language Models for Difficulty Estimation of Foreign Language Content with Application to Language Learning [1.4392208044851977]
We use large language models to help learners enhance their proficiency in a foreign language.
Our work centers on French content, but our approach is readily transferable to other languages.
arXiv Detail & Related papers (2023-09-10T21:23:09Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- User Adaptive Language Learning Chatbots with a Curriculum [55.63893493019025]
We adapt lexically constrained decoding to a dialog system, which encourages the dialog system to include curriculum-aligned words and phrases in its generated utterances.
The evaluation result demonstrates that the dialog system with curriculum infusion improves students' understanding of target words and increases their interest in practicing English.
arXiv Detail & Related papers (2023-04-11T20:41:41Z)
- Using Chatbots to Teach Languages [43.866863322607216]
Our system can adapt to users' language proficiency on the fly.
We provide automatic grammar error feedback to help users learn from their mistakes.
Our next step is to make our system more adaptive to user profile information by using reinforcement learning algorithms.
arXiv Detail & Related papers (2022-07-31T07:01:35Z)
- Towards Lifelong Learning of Multilingual Text-To-Speech Synthesis [87.75833205560406]
This work presents a lifelong learning approach to train a multilingual Text-To-Speech (TTS) system.
It does not require pooled data from all languages altogether, and thus alleviates the storage and computation burden.
arXiv Detail & Related papers (2021-10-09T07:00:38Z)
- Meta-Transfer Learning for Code-Switched Speech Recognition [72.84247387728999]
We propose a new learning method, meta-transfer learning, to transfer learn on a code-switched speech recognition system in a low-resource setting.
Our model learns to recognize individual languages, and transfer them so as to better recognize mixed-language speech by conditioning the optimization on the code-switching data.
arXiv Detail & Related papers (2020-04-29T14:27:19Z)
- Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models to tackle tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.