Computational linguistic assessment of textbook and online learning
media by means of threshold concepts in business education
- URL: http://arxiv.org/abs/2008.02096v1
- Date: Wed, 5 Aug 2020 12:56:16 GMT
- Title: Computational linguistic assessment of textbook and online learning
media by means of threshold concepts in business education
- Authors: Andy Lücking and Sebastian Brückner and Giuseppe Abrami and Tolga
  Uslu and Alexander Mehler
- Abstract summary: From a linguistic perspective, threshold concepts are instances of specialized vocabularies, exhibiting particular linguistic features.
The profiles of 63 threshold concepts from business education have been investigated in textbooks, newspapers, and Wikipedia.
The three kinds of resources can indeed be distinguished in terms of their threshold concepts' profiles.
- Score: 59.003956312175795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Threshold concepts are key terms in domain-based knowledge acquisition. They
are regarded as building blocks of the conceptual development of domain
knowledge within particular learners. From a linguistic perspective, however,
threshold concepts are instances of specialized vocabularies, exhibiting
particular linguistic features. Threshold concepts are typically used in
specialized texts such as textbooks -- that is, within a formal learning
environment. However, they also occur in informal learning environments like
newspapers. In this article, a first approach is taken to combine both lines
of research into an overarching research program -- that is, to provide a
computational linguistic assessment of different resources, including in
particular online resources, by means of threshold concepts. To this end, the
distributive profiles of 63 threshold concepts from business education (which
have been collected from threshold concept research) have been investigated in
three kinds of (German) resources, namely textbooks, newspapers, and
Wikipedia, the latter being one of the largest and most widely used online
resources. We looked at the threshold concepts' frequency distribution, their
compound distribution, and their network structure within the three kinds of
resources. The two main
findings can be summarized as follows: Firstly, the three kinds of resources
can indeed be distinguished in terms of their threshold concepts' profiles.
Secondly, Wikipedia definitely appears to be a formal learning resource.
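As a rough illustration of the kind of analysis described above, the following Python sketch computes, for a list of threshold concepts, their relative frequencies, a crude compound count, and a sentence-level co-occurrence network per corpus. This is a minimal sketch and not the authors' pipeline: the concept sample, the one-file-per-resource layout, and the substring-based compound test (standing in for proper morphological analysis of German compounds) are all assumptions made for illustration.

    # Minimal sketch, not the authors' pipeline: profile a term list across
    # corpora by (i) relative frequency, (ii) a crude compound count, and
    # (iii) a sentence-level co-occurrence network.
    import re
    from collections import Counter
    from itertools import combinations

    # Hypothetical sample of German threshold concepts from business education.
    CONCEPTS = ["opportunitätskosten", "inflation", "angebot"]

    def tokenize(text):
        # Lowercased word tokens; a real pipeline would lemmatize German text.
        return re.findall(r"\w+", text.lower())

    def profile(text, concepts):
        tokens = tokenize(text)
        counts = Counter(tokens)
        total = len(tokens) or 1
        # Relative frequency of each concept as a standalone token.
        freq = {c: counts[c] / total for c in concepts}
        # Rough compound proxy: tokens containing the concept as a substring,
        # e.g. "inflationsrate" for "inflation" (no morphological analysis).
        compounds = {c: sum(n for t, n in counts.items() if c in t and t != c)
                     for c in concepts}
        # Co-occurrence network: an edge for each pair of concepts appearing
        # in the same (naively split) sentence.
        edges = Counter()
        for sentence in re.split(r"[.!?]+", text.lower()):
            present = sorted(c for c in concepts if c in sentence)
            edges.update(combinations(present, 2))
        return freq, compounds, edges

    if __name__ == "__main__":
        # Hypothetical one-file-per-resource-kind layout.
        for kind, path in [("textbook", "textbook.txt"),
                           ("newspaper", "newspaper.txt"),
                           ("wikipedia", "wikipedia.txt")]:
            with open(path, encoding="utf-8") as f:
                freq, compounds, edges = profile(f.read(), CONCEPTS)
            print(kind, freq, compounds, dict(edges))

Comparing the resulting per-resource profiles (e.g., frequency ranks or edge sets) is what would let the three kinds of resources be distinguished, in the spirit of the paper's findings.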
Related papers
- Domain Embeddings for Generating Complex Descriptions of Concepts in
Italian Language [65.268245109828]
We propose a Distributional Semantic resource enriched with linguistic and lexical information extracted from electronic dictionaries.
The resource comprises 21 domain-specific matrices, one comprehensive matrix, and a Graphical User Interface.
Our model facilitates the generation of reasoned semantic descriptions of concepts by selecting matrices directly associated with concrete conceptual knowledge.
arXiv Detail & Related papers (2024-02-26T15:04:35Z)
- Towards Open Vocabulary Learning: A Survey [146.90188069113213]
Deep neural networks have made impressive advancements in various core tasks like segmentation, tracking, and detection.
Recently, open vocabulary settings were proposed due to the rapid progress of vision language pre-training.
This paper provides a thorough review of open vocabulary learning, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2023-06-28T02:33:06Z)
- Efficient Induction of Language Models Via Probabilistic Concept Formation [13.632454840363916]
We present a novel approach to the acquisition of language models from corpora.
The framework builds on Cobweb, an early system for constructing taxonomic hierarchies of probabilistic concepts.
We explore three new extensions to Cobweb -- the Word, Leaf, and Path variants.
arXiv Detail & Related papers (2022-12-22T18:16:58Z)
- Joint Language Semantic and Structure Embedding for Knowledge Graph Completion [66.15933600765835]
We propose to jointly embed the semantics in the natural language description of the knowledge triplets with their structure information.
Our method embeds knowledge graphs for the completion task via fine-tuning pre-trained language models.
Our experiments on a variety of knowledge graph benchmarks have demonstrated the state-of-the-art performance of our method.
arXiv Detail & Related papers (2022-09-19T02:41:02Z)
- O-Dang! The Ontology of Dangerous Speech Messages [53.15616413153125]
We present O-Dang!: The Ontology of Dangerous Speech Messages, a systematic and interoperable Knowledge Graph (KG).
O-Dang! is designed to gather and organize Italian datasets into a structured KG, according to the principles shared within the Linguistic Linked Open Data community.
It provides a model for encoding both gold standard and single-annotator labels in the KG.
arXiv Detail & Related papers (2022-07-13T11:50:05Z)
- Expedition: A System for the Unsupervised Learning of a Hierarchy of Concepts [0.522145960878624]
We present a system for bottom-up cumulative learning of myriad concepts corresponding to meaningful character strings.
The learning is self-supervised in that the concepts discovered are used as predictors as well as targets of prediction.
We devise an objective for segmenting with the learned concepts, derived from comparing to a baseline prediction system.
arXiv Detail & Related papers (2021-12-17T06:49:18Z)
- Employing distributional semantics to organize task-focused vocabulary learning [2.1320960069210475]
We explore how computational linguistic methods can be combined with graph-based learner models to organize task-focused vocabulary learning.
Based on the highly structured learner model and concepts from network analysis, the learner is guided to efficiently explore the targeted lexical space.
arXiv Detail & Related papers (2020-11-22T21:51:19Z)
- Inferential Text Generation with Multiple Knowledge Sources and Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuel for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.