Teacher Perception of Automatically Extracted Grammar Concepts for L2
Language Learning
- URL: http://arxiv.org/abs/2310.18417v1
- Date: Fri, 27 Oct 2023 18:17:29 GMT
- Title: Teacher Perception of Automatically Extracted Grammar Concepts for L2
Language Learning
- Authors: Aditi Chaudhary, Arun Sampath, Ashwin Sheshadri, Antonios
Anastasopoulos, Graham Neubig
- Abstract summary: We apply this method to teaching two Indian languages, Kannada and Marathi, which do not have well-developed resources for second language learning.
We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary).
We enlist the help of language educators from schools in North America to perform a manual evaluation; they find the materials have potential for use in their lesson preparation and learner evaluation.
- Score: 66.79173000135717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the challenges in language teaching is how best to organize rules
regarding syntax, semantics, or phonology in a meaningful manner. This not only
requires content creators to have pedagogical skills, but also a deep
understanding of that language. While comprehensive materials to develop such
curricula are available in English and some broadly spoken languages, for many
other languages, teachers need to manually create them in response to their
students' needs. This is challenging because i) it requires that such experts
be accessible and have the necessary resources, and ii) describing all the
intricacies of a language is time-consuming and prone to omission. In this
work, we aim to facilitate this process by automatically discovering and
visualizing grammar descriptions. We extract descriptions from a natural text
corpus that answer questions about morphosyntax (learning of word order,
agreement, case marking, or word formation) and semantics (learning of
vocabulary). We apply this method for teaching two Indian languages, Kannada
and Marathi, which, unlike English, do not have well-developed resources for
second language learning. To assess the perceived utility of the extracted
material, we enlist the help of language educators from schools in North
America to perform a manual evaluation; they find the materials have potential
for use in their lesson preparation and learner evaluation.
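The paper itself does not include code, but as a rough illustration of the kind of extraction it describes, the sketch below counts whether subjects and objects precede or follow their head verb in a dependency-parsed corpus (CoNLL-U format) and turns the counts into a short, teacher-readable word-order statement. This is a minimal sketch under assumed inputs; the corpus file name, threshold-free phrasing, and choice of relations are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' actual pipeline): derive a simple
# word-order description from a dependency-parsed corpus in CoNLL-U format.
# The corpus file name below is a hypothetical placeholder.
from collections import Counter

def word_order_counts(conllu_path):
    """Count how often nsubj/obj dependents appear before vs. after their head."""
    counts = Counter()
    sentence = []
    with open(conllu_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                      # blank line ends a sentence
                _tally(sentence, counts)
                sentence = []
            elif not line.startswith("#"):    # skip sentence-level comments
                cols = line.split("\t")
                if cols[0].isdigit():         # skip multiword ranges like "3-4"
                    sentence.append(cols)
    if sentence:
        _tally(sentence, counts)
    return counts

def _tally(sentence, counts):
    # CoNLL-U columns: ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC
    for cols in sentence:
        token_id, head, deprel = int(cols[0]), cols[6], cols[7]
        if deprel in ("nsubj", "obj") and head.isdigit():
            order = "before-head" if token_id < int(head) else "after-head"
            counts[(deprel, order)] += 1

def describe(counts, relation):
    """Turn raw counts into a short, teacher-readable statement."""
    before = counts[(relation, "before-head")]
    after = counts[(relation, "after-head")]
    total = before + after
    if total == 0:
        return f"No evidence found for relation '{relation}'."
    share = before / total
    return f"{relation}: appears before its head in {share:.0%} of {total} observed cases."

if __name__ == "__main__":
    counts = word_order_counts("kannada_corpus.conllu")  # hypothetical corpus file
    print(describe(counts, "nsubj"))
    print(describe(counts, "obj"))
```

For a predominantly verb-final language such as Kannada, a run over a parsed corpus would be expected to report subjects and objects appearing before their head in the large majority of cases, which is the kind of corpus-grounded statement a teacher could turn into a lesson point.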
Related papers
- Learning Language Structures through Grounding [8.437466837766895]
We consider a family of machine learning tasks that aim to learn language structures through grounding.
In Part I, we consider learning syntactic parses through visual grounding.
In Part II, we propose two execution-aware methods to map sentences into corresponding semantic structures.
In Part III, we propose methods that learn language structures from annotations in other languages.
arXiv Detail & Related papers (2024-06-14T02:21:53Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels.
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
- A Survey of Knowledge Enhanced Pre-trained Language Models [78.56931125512295]
We present a comprehensive review of Knowledge Enhanced Pre-trained Language Models (KE-PLMs)
For NLU, we divide the types of knowledge into four categories: linguistic knowledge, text knowledge, knowledge graph (KG) and rule knowledge.
The KE-PLMs for NLG are categorized into KG-based and retrieval-based methods.
arXiv Detail & Related papers (2022-11-11T04:29:02Z)
- Rethinking Annotation: Can Language Learners Contribute? [13.882919101548811]
In this paper, we investigate whether language learners can contribute annotations to benchmark datasets.
We target three languages, English, Korean, and Indonesian, and the four NLP tasks of sentiment analysis, natural language inference, named entity recognition, and machine reading comprehension.
We find that language learners, especially those with intermediate or advanced levels of language proficiency, are able to provide fairly accurate labels with the help of additional resources.
arXiv Detail & Related papers (2022-10-13T08:22:25Z)
- Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning [91.49622922938681]
We present a framework that automatically discovers and visualizes descriptions of different aspects of grammar.
Specifically, we extract descriptions from a natural text corpus that answer questions about morphosyntax and semantics.
We apply this method for teaching two Indian languages, Kannada and Marathi, which, unlike English, do not have well-developed pedagogical resources.
arXiv Detail & Related papers (2022-06-10T14:52:22Z) - AUTOLEX: An Automatic Framework for Linguistic Exploration [93.89709486642666]
We propose an automatic framework that aims to ease linguists' discovery and extraction of concise descriptions of linguistic phenomena.
Specifically, we apply this framework to extract descriptions for three phenomena: morphological agreement, case marking, and word order.
We evaluate the descriptions with the help of language experts and propose a method for automated evaluation when human evaluation is infeasible.
arXiv Detail & Related papers (2022-03-25T20:37:30Z) - Exploring Teacher-Student Learning Approach for Multi-lingual
Speech-to-Intent Classification [73.5497360800395]
We develop an end-to-end system that supports multiple languages.
We exploit knowledge from a pre-trained multi-lingual natural language processing model.
arXiv Detail & Related papers (2021-09-28T04:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.