AI-based Arabic Language and Speech Tutor
- URL: http://arxiv.org/abs/2210.12346v1
- Date: Sat, 22 Oct 2022 04:22:16 GMT
- Title: AI-based Arabic Language and Speech Tutor
- Authors: Sicong Shao, Saleem Alharir, Salim Hariri, Pratik Satam, Sonia Shiri,
Abdessamad Mbarki
- Abstract summary: We present our approach for developing an Artificial Intelligence-based Arabic Language and Speech Tutor (AI-ALST).
The AI-ALST system is an intelligent tutor that analyzes and assesses students learning the Moroccan dialect at the University of Arizona (UA).
The AI-ALST provides a self-learning environment for practicing the pronunciation of each lesson.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past decade, we have observed a growing interest in using technologies
such as artificial intelligence (AI), machine learning, and chatbots to provide
assistance to language learners, especially in second language learning. By
using AI, natural language processing (NLP), and chatbots, we can create an
intelligent self-learning environment that goes beyond multiple-choice
questions and fill-in-the-blank exercises. In addition, NLP allows for
learning to be adaptive in that it offers more than an indication that an error
has occurred. It also provides a description of the error, uses linguistic
analysis to isolate the source of the error, and then suggests additional
drills to achieve optimal individualized learning outcomes. In this paper, we
present our approach for developing an Artificial Intelligence-based Arabic
Language and Speech Tutor (AI-ALST) for teaching the Moroccan Arabic dialect.
The AI-ALST system is an intelligent tutor that provides analysis and
assessment of students learning the Moroccan dialect at the University of
Arizona (UA). The AI-ALST provides a self-learning environment for practicing
the pronunciation of each lesson. In this paper, we present our initial experimental
evaluation of the AI-ALST that is based on MFCC (Mel frequency cepstrum
coefficient) feature extraction, bidirectional LSTM (Long Short-Term Memory),
attention mechanism, and a cost-based strategy for dealing with class-imbalance
learning. We evaluated our tutor on the word pronunciation of lesson 1 of the
Moroccan Arabic dialect class. The experimental results show that the AI-ALST
effectively detects pronunciation errors; we evaluate its performance using
the F_1-score, accuracy, precision, and recall.
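The pipeline the abstract names (frame-level features pooled by an attention mechanism, a cost-based weighting for class imbalance, and precision/recall/F_1 scoring) can be sketched in a few lines. Everything below is illustrative: the feature dimensions, the inverse-frequency weighting scheme, and the toy labels are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_pool(hidden, w):
    """Dot-product attention pooling over a sequence of hidden states.

    hidden: (T, d) array, e.g. per-frame outputs of a bidirectional LSTM.
    w: (d,) scoring vector (learned in practice; arbitrary here).
    Returns the attention weights (T,) and the pooled context vector (d,).
    """
    scores = hidden @ w       # one relevance score per frame
    alpha = softmax(scores)   # normalize to a distribution over frames
    context = alpha @ hidden  # weighted sum of frames
    return alpha, context

def inverse_frequency_weights(labels, n_classes):
    """One cost-sensitive option: weight each class by inverse frequency,
    so errors on the rare (mispronounced) class cost more in the loss."""
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    return counts.sum() / (n_classes * np.maximum(counts, 1))

def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

rng = np.random.default_rng(0)
frames = rng.normal(size=(40, 16))      # 40 frames of 16-dim "BiLSTM" features
alpha, context = attention_pool(frames, rng.normal(size=16))

labels = np.array([0] * 90 + [1] * 10)  # imbalanced: few mispronunciations
print(inverse_frequency_weights(labels, 2))  # rare class gets the larger weight

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(precision_recall_f1(y_true, y_pred))   # → (0.75, 0.75, 0.75)
```

In a real system the attention weights and class weights would feed a weighted cross-entropy loss during BiLSTM training; the sketch only shows the arithmetic each component performs.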
Related papers
- Generative AI, Pragmatics, and Authenticity in Second Language Learning [0.0]
There are obvious benefits to integrating generative AI (artificial intelligence) into language learning and teaching.
However, due to how AI systems understand human language, they lack the lived experience needed to use language with the same social awareness as humans.
There are built-in linguistic and cultural biases based on their training data, which is mostly in English and predominantly from Western sources.
arXiv Detail & Related papers (2024-10-18T11:58:03Z)
- Learning Phonotactics from Linguistic Informants [54.086544221761486]
Our model iteratively selects or synthesizes a data-point according to one of a range of information-theoretic policies.
We find that the information-theoretic policies that our model uses to select items to query the informant achieve sample efficiency comparable to, or greater than, fully supervised approaches.
arXiv Detail & Related papers (2024-05-08T00:18:56Z)
- Distributed agency in second language learning and teaching through generative AI [0.0]
ChatGPT can provide informal second language practice through chats in written or voice forms.
Instructors can use AI to build learning and assessment materials in a variety of media.
arXiv Detail & Related papers (2024-03-29T14:55:40Z)
- Lip Reading for Low-resource Languages by Learning and Combining General Speech Knowledge and Language-specific Knowledge [57.38948190611797]
This paper proposes a novel lip reading framework, especially for low-resource languages.
Since low-resource languages do not have enough video-text paired data to train the model, it is regarded as challenging to develop lip reading models for low-resource languages.
arXiv Detail & Related papers (2023-08-18T05:19:03Z)
- Bag of Tricks for Effective Language Model Pretraining and Downstream Adaptation: A Case Study on GLUE [93.98660272309974]
This report briefly describes our submission Vega v1 on the General Language Understanding Evaluation leaderboard.
GLUE is a collection of nine natural language understanding tasks, including question answering, linguistic acceptability, sentiment analysis, text similarity, paraphrase detection, and natural language inference.
With our optimized pretraining and fine-tuning strategies, our 1.3 billion model sets new state-of-the-art on 4/9 tasks, achieving the best average score of 91.3.
arXiv Detail & Related papers (2023-02-18T09:26:35Z)
- What Artificial Neural Networks Can Tell Us About Human Language Acquisition [47.761188531404066]
Rapid progress in machine learning for natural language processing has the potential to transform debates about how humans learn language.
To increase the relevance of learnability results from computational models, we need to train model learners without significant advantages over humans.
arXiv Detail & Related papers (2022-08-17T00:12:37Z)
- Autoencoding Language Model Based Ensemble Learning for Commonsense Validation and Explanation [1.503974529275767]
We present an Autoencoding Language Model based Ensemble learning method for commonsense validation and explanation.
Our method can distinguish natural language statements that are against commonsense (validation subtask) and correctly identify the reason a statement is against commonsense (explanation selection subtask).
Experimental results on the benchmark dataset of SemEval-2020 Task 4 show that our method outperforms state-of-the-art models.
arXiv Detail & Related papers (2022-04-07T09:43:51Z)
- Sequence-level self-learning with multiple hypotheses [53.04725240411895]
We develop new self-learning techniques with an attention-based sequence-to-sequence (seq2seq) model for automatic speech recognition (ASR)
In contrast to conventional unsupervised learning approaches, we adopt the multi-task learning (MTL) framework.
Our experimental results show that our method reduces the WER on the British speech data from 14.55% to 10.36% compared to the baseline model trained on US English data only.
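As context for the WER figures quoted above, word error rate is word-level edit distance divided by the reference length. A minimal stdlib-only sketch (the example sentences are made up, not from the paper):

```python
def word_error_rate(reference, hypothesis):
    """WER via Levenshtein distance over word sequences:
    (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution (sat→sit) plus one deletion (the) over 6 reference words.
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))  # ≈ 0.333
```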
arXiv Detail & Related papers (2021-12-10T20:47:58Z)
- Robotic Assistant Agent for Student and Machine Co-Learning on AI-FML Practice with AIoT Application [0.487576911714538]
The structure of AI-FML contains fuzzy logic, neural network, and evolutionary computation.
The Robotic Assistant Agent (RAA) can assist students and machines in co-learning English and AI-FML practice.
arXiv Detail & Related papers (2021-05-11T13:19:06Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows benefits of AI explanation as interfaces for machine teaching--supporting trust calibration and enabling rich forms of teaching feedback, and potential drawbacks--anchoring effect with the model judgment and cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.