Word segmentation granularity in Korean
- URL: http://arxiv.org/abs/2309.03713v1
- Date: Thu, 7 Sep 2023 13:42:05 GMT
- Title: Word segmentation granularity in Korean
- Authors: Jungyeul Park, Mija Kim
- Abstract summary: There are multiple possible levels of word segmentation granularity in Korean.
For specific language processing and corpus annotation tasks, several different granularity levels have been proposed and utilized.
Interestingly, the granularity that separates only functional morphemes yields the best performance for phrase structure parsing.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper describes word segmentation granularity in Korean language
processing. From a word separated by blank space, termed an eojeol, down to a
full sequence of morphemes, there are multiple possible levels of word
segmentation granularity in Korean. Several different granularity levels have
been proposed and utilized for specific language processing and corpus
annotation tasks, because agglutinative languages such as Korean exhibit a
one-to-one mapping between functional morphemes and syntactic categories. We
therefore analyze these different granularity levels, presenting examples from
Korean language processing systems for future reference. Interestingly, the
granularity that separates only functional morphemes, including case markers
and verbal endings, while keeping derivational suffixes attached, yields the
best performance for phrase structure parsing. This contradicts the previous
best practice of separating all morphemes, which has been the de facto standard
for various Korean language processing applications.
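To make these granularity levels concrete, here is a minimal sketch in Python, assuming a hand-written toy analysis of a single eojeol; the `ANALYSIS` table and the `segment` helper are illustrative assumptions, not the paper's method or the output of a real morphological analyzer.

```python
# Toy illustration of word segmentation granularity in Korean.
# Eojeol: 선생님께서는 'the teacher (honorific, topic)'
#   선생 'teacher' + 님 (honorific derivational suffix)
#   + 께서 (honorific nominative case marker) + 는 (topic marker)
ANALYSIS = [
    ("선생", "lexical"),      # noun stem
    ("님", "derivational"),   # derivational (honorific) suffix
    ("께서", "functional"),   # case marker
    ("는", "functional"),     # topic marker
]

def segment(analysis, level):
    """Return the token sequence for one eojeol at a given granularity."""
    if level == "eojeol":
        # Coarsest level: the blank-space-delimited word is one token.
        return ["".join(m for m, _ in analysis)]
    if level == "functional":
        # Intermediate level: split off only functional morphemes (case
        # markers, verbal endings); keep derivational suffixes attached.
        tokens, stem = [], ""
        for morph, kind in analysis:
            if kind == "functional":
                if stem:
                    tokens.append(stem)
                    stem = ""
                tokens.append(morph)
            else:
                stem += morph
        if stem:
            tokens.append(stem)
        return tokens
    if level == "morpheme":
        # Finest level: every morpheme is its own token.
        return [m for m, _ in analysis]
    raise ValueError(f"unknown granularity level: {level}")

for level in ("eojeol", "functional", "morpheme"):
    print(level, segment(ANALYSIS, level))
# eojeol     ['선생님께서는']
# functional ['선생님', '께서', '는']
# morpheme   ['선생', '님', '께서', '는']
```

Note how the intermediate level keeps the derivational suffix 님 attached to its stem while splitting off the case and topic markers; this is the granularity the abstract reports as optimal for phrase structure parsing.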
Related papers
- Does Incomplete Syntax Influence Korean Language Model? Focusing on Word Order and Case Markers
Syntactic elements, such as word order and case markers, are fundamental in natural language processing.
This study explores whether Korean language models can accurately capture this flexibility.
arXiv Detail & Related papers (2024-07-12T11:33:41Z)
- MAGNET: Improving the Multilingual Fairness of Language Models with Adaptive Gradient-Based Tokenization
In multilingual settings, non-Latin scripts and low-resource languages are usually disadvantaged in terms of language models' utility, efficiency, and cost.
We propose adaptive gradient-based subword tokenization to reduce over-segmentation in multilingual settings.
arXiv Detail & Related papers (2024-07-11T18:59:21Z)
- Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models
We introduce DecoMT, a novel few-shot prompting approach that decomposes the translation process into a sequence of word chunk translations.
We show that DecoMT outperforms strong few-shot prompting with the BLOOM model, with an average improvement of 8 chrF++ points across the examined languages.
arXiv Detail & Related papers (2023-05-22T14:52:47Z)
- K-UniMorph: Korean Universal Morphology and its Feature Schema
We present a new Universal Morphology dataset for Korean.
We outline each grammatical criterion in detail for the verbal endings, clarify how to extract inflected forms, and demonstrate how we generate the morphological schemata.
We carry out the inflection task using three different Korean word forms: letters, syllables, and morphemes (see the decomposition sketch after this list).
arXiv Detail & Related papers (2023-05-10T17:44:01Z)
- Korean Named Entity Recognition Based on Language-Specific Features
We propose a novel way of improving named entity recognition in the Korean language using its language-specific features.
The proposed scheme decomposes Korean words into morphemes and reduces the ambiguity of named entities.
Analyses of the results of statistical and neural models reveal that the proposed morpheme-based format is feasible.
arXiv Detail & Related papers (2023-05-10T17:34:52Z)
- A Massively Multilingual Analysis of Cross-linguality in Shared Embedding Space
In cross-lingual language models, representations for many different languages live in the same space.
We compute a task-based measure of cross-lingual alignment in the form of bitext retrieval performance.
We examine a range of linguistic, quasi-linguistic, and training-related features as potential predictors of these alignment metrics.
arXiv Detail & Related papers (2021-09-13T21:05:37Z)
- Augmenting Part-of-speech Tagging with Syntactic Information for Vietnamese and Chinese
We implement the idea of improving word segmentation and part-of-speech tagging for Vietnamese by employing simplified constituency parsing.
Our neural model for joint word segmentation and part-of-speech tagging has the architecture of a syllable-based constituency parser.
This model can be augmented with predicted word boundary and part-of-speech tags by other tools.
arXiv Detail & Related papers (2021-02-24T08:57:02Z)
- Revisiting Language Encoding in Learning Multilingual Representations
We propose a new approach called Cross-lingual Language Projection (XLP) to replace language embedding.
XLP projects word embeddings into a language-specific semantic space, and the projected embeddings are then fed into the Transformer model.
Experiments show that XLP can significantly boost model performance on a wide range of multilingual benchmark datasets.
arXiv Detail & Related papers (2021-02-16T18:47:10Z)
- Automatic Extraction of Rules Governing Morphological Agreement
We develop an automated framework for extracting a first-pass grammatical specification from raw text.
We focus on extracting rules describing agreement, a morphosyntactic phenomenon at the core of the grammars of many of the world's languages.
We apply our framework to all languages included in the Universal Dependencies project, with promising results.
arXiv Detail & Related papers (2020-10-02T18:31:45Z)
- Are All Good Word Vector Spaces Isomorphic?
We show that variance in performance across language pairs is not only due to typological differences, but can mostly be attributed to the size of the monolingual resources available.
arXiv Detail & Related papers (2020-04-08T15:49:19Z)
- Morphological Word Segmentation on Agglutinative Languages for Neural Machine Translation
We propose a source-side morphological word segmentation method for neural machine translation (NMT).
It incorporates morphology knowledge to preserve the linguistic and semantic information in the word structure while reducing the vocabulary size at training time.
It can be utilized as a preprocessing tool to segment the words in agglutinative languages for other natural language processing (NLP) tasks.
arXiv Detail & Related papers (2020-01-02T10:05:02Z)
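As a small illustration of the letters/syllables distinction mentioned in the K-UniMorph entry above, here is a minimal sketch; it relies only on standard Unicode normalization, and the example word is an assumption for illustration, not data from that paper.

```python
import unicodedata

# A Korean verb form: 먹었다 'ate' (stem 먹 + past tense 었 + ending 다).
word = "먹었다"

# Syllable-level form: each precomposed Hangul syllable is one unit.
syllables = list(word)

# Letter (jamo) level form: NFD normalization decomposes each syllable
# into its conjoining jamo (leading consonant, vowel, trailing consonant).
letters = list(unicodedata.normalize("NFD", word))

print(syllables)  # ['먹', '었', '다']
print(letters)    # ['ᄆ', 'ᅥ', 'ᆨ', 'ᄋ', 'ᅥ', 'ᆻ', 'ᄃ', 'ᅡ']

# The morpheme-level form (먹 / 었 / 다) requires a morphological
# analyzer and cannot be derived from Unicode normalization alone.
```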