A Subword Guided Neural Word Segmentation Model for Sindhi
- URL: http://arxiv.org/abs/2012.15079v1
- Date: Wed, 30 Dec 2020 08:31:31 GMT
- Title: A Subword Guided Neural Word Segmentation Model for Sindhi
- Authors: Wazir Ali, Jay Kumar, Zenglin Xu, Congjian Luo, Junyu Lu, Junming
Shao, Rajesh Kumar, and Yazhou Ren
- Abstract summary: Deep neural networks employ multiple processing layers for learning text representations to alleviate the burden of manual feature engineering in Natural Language Processing (NLP).
In this paper, we investigate supervised Sindhi Word Segmentation (SWS) using unlabeled data with a Subword Guided Neural Word Segmenter (SGNWS) for Sindhi.
Our proposed SGNWS model achieves an F1 value of 98.51% without relying on feature engineering.
- Score: 24.489385463262188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks employ multiple processing layers for learning text
representations to alleviate the burden of manual feature engineering in
Natural Language Processing (NLP). Such text representations are widely used to
extract features from unlabeled data. Word segmentation is a fundamental and
unavoidable prerequisite for many languages. Sindhi is an under-resourced
language whose segmentation is challenging: it exhibits both space-omission and
space-insertion issues, and it lacks a labeled corpus for segmentation. In this
paper, we investigate supervised Sindhi Word Segmentation (SWS) using unlabeled
data with a Subword Guided Neural Word Segmenter (SGNWS). To learn text
representations, we incorporate subword representations into a recurrent neural
architecture that captures word information at the morpheme level, combining a
Bidirectional Long Short-Term Memory (BiLSTM) network, a self-attention
mechanism, and a Conditional Random Field (CRF). Our proposed SGNWS model
achieves an F1 score of 98.51% without relying on feature engineering. The
empirical results demonstrate the benefits of the proposed model over existing
Sindhi word segmenters.
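As a rough illustration of the pipeline the abstract describes (subword embeddings feeding a BiLSTM with self-attention, decoded by a CRF over boundary tags), here is a minimal PyTorch-style sketch. The layer sizes, the four-tag (B/M/E/S) scheme, and the `torchcrf` dependency are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # assumed dependency: pip install pytorch-crf

class SubwordSegmenter(nn.Module):
    """Illustrative subword-guided segmenter: subword embeddings ->
    BiLSTM -> self-attention -> CRF over B/M/E/S boundary tags.
    All sizes are made up; this is not the authors' code."""
    def __init__(self, subword_vocab=8000, emb=128, hidden=256, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(subword_vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden // 2, bidirectional=True,
                              batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, num_heads=4,
                                          batch_first=True)
        self.proj = nn.Linear(hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, subword_ids, tags=None):
        h, _ = self.bilstm(self.embed(subword_ids))
        h, _ = self.attn(h, h, h)          # self-attention over BiLSTM states
        emissions = self.proj(h)
        if tags is not None:               # training: CRF negative log-likelihood
            return -self.crf(emissions, tags)
        return self.crf.decode(emissions)  # inference: best tag sequence

model = SubwordSegmenter()
ids = torch.randint(0, 8000, (2, 10))      # toy batch of subword ids
print(model(ids))                          # predicted tag ids per position
```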
Related papers
- From Open-Vocabulary to Vocabulary-Free Semantic Segmentation [78.62232202171919]
Open-vocabulary semantic segmentation enables models to identify novel object categories beyond their training data.
Current approaches still rely on manually specified class names as input, creating an inherent bottleneck in real-world applications.
This work proposes a Vocabulary-Free Semantic Segmentation pipeline, eliminating the need for predefined class vocabularies.
arXiv Detail & Related papers (2025-02-17T15:17:08Z)
- Pixel Sentence Representation Learning [67.4775296225521]
In this work, we conceptualize the learning of sentence-level textual semantics as a visual representation learning process.
We employ visually grounded text perturbations such as typos and word-order shuffling, which resonate with human cognitive patterns and allow the perturbations to be perceived as continuous.
Our approach is further bolstered by large-scale unsupervised topical alignment training and natural language inference supervision.
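As a toy illustration of such perturbations (hypothetical helpers, not the paper's code):

```python
import random

random.seed(0)  # reproducible toy example

def typo(word):
    """Swap two adjacent characters: a visually minor, typo-like edit."""
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(sentence):
    """Apply a typo to each word, then swap one adjacent word pair."""
    words = [typo(w) for w in sentence.split()]
    if len(words) > 1:
        j = random.randrange(len(words) - 1)
        words[j], words[j + 1] = words[j + 1], words[j]
    return " ".join(words)

print(perturb("the quick brown fox jumps over the lazy dog"))
```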
arXiv Detail & Related papers (2024-02-13T02:46:45Z)
- Disambiguation of Company names via Deep Recurrent Networks [101.90357454833845]
We propose a Siamese LSTM Network approach to extract -- via supervised learning -- an embedding of company name strings.
We analyse how an Active Learning approach to prioritise the samples to be labelled leads to a more efficient overall learning pipeline.
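A minimal sketch of the Siamese idea: one shared character-level LSTM encodes both name strings, and their embeddings are compared. Sizes and helpers are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseEncoder(nn.Module):
    """Character-level LSTM encoder shared between both branches."""
    def __init__(self, n_chars=128, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, char_ids):
        _, (h, _) = self.lstm(self.embed(char_ids))
        return h[-1]                        # final hidden state as embedding

def name_ids(name, max_len=32):
    ids = [min(ord(c), 127) for c in name.lower()[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids))).unsqueeze(0)

enc = SiameseEncoder()
a, b = enc(name_ids("ACME Corp.")), enc(name_ids("ACME Corporation"))
# Untrained here, so the score is arbitrary; after contrastive training,
# high similarity would indicate the same company.
print(F.cosine_similarity(a, b).item())
```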
arXiv Detail & Related papers (2023-03-07T15:07:57Z)
- Are word boundaries useful for unsupervised language learning? [13.049946284598935]
Words provide at least two kinds of relevant information: boundary information and meaningful units.
We show that word boundary information may be absent or unreliable in the case of speech input.
We show that gold boundaries can be replaced by automatically found ones obtained with an unsupervised segmentation algorithm.
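As context for what an unsupervised segmenter might look like, here is a toy branching-entropy segmenter that places a boundary after characters whose successors are unpredictable. The paper's actual algorithm is not specified in this summary; everything below is an illustrative stand-in.

```python
from collections import Counter
import math

def segment(text, threshold=1.0):
    """Toy unsupervised segmentation by branching entropy: insert a
    boundary after characters with many, evenly spread successors."""
    pairs = Counter(zip(text, text[1:]))
    totals = Counter(text[:-1])
    def entropy(c):
        probs = [n / totals[c] for (a, _), n in pairs.items() if a == c]
        return -sum(p * math.log2(p) for p in probs)
    out = []
    for i, ch in enumerate(text):
        out.append(ch)
        if i < len(text) - 1 and entropy(ch) > threshold:
            out.append(" ")
    return "".join(out)

print(segment("thedograntheend"))  # 'the dogranthe e nd' -- crude, but
                                   # shows the mechanism
```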
arXiv Detail & Related papers (2022-10-06T14:49:42Z)
- Between words and characters: A Brief History of Open-Vocabulary Modeling and Tokenization in NLP [22.772546707304766]
We show how hybrid approaches of words and characters as well as subword-based approaches based on learned segmentation have been proposed and evaluated.
We conclude that there is no single silver-bullet solution for all applications, and there likely never will be.
arXiv Detail & Related papers (2021-12-20T13:04:18Z)
- Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-stage Span Labeling [0.2624902795082451]
We propose a neural model named SpanSegTag for joint Chinese word segmentation and part-of-speech tagging.
Our experiments show that our BERT-based model SpanSegTag achieves competitive performance on the CTB5, CTB6, and UD datasets.
arXiv Detail & Related papers (2021-12-17T12:59:02Z)
- On the Difficulty of Segmenting Words with Attention [32.97060026226872]
We show, however, that even on monolingual data this approach is brittle.
In experiments with different input types, data sizes, and segmentation algorithms, only models trained to predict phones from words succeed in the task.
arXiv Detail & Related papers (2021-09-21T11:37:08Z)
- More Than Words: Collocation Tokenization for Latent Dirichlet Allocation Models [71.42030830910227]
We propose a new metric for measuring the clustering quality in settings where the models differ.
We show that topics trained with merged tokens yield topic keys that are clearer, more coherent, and more effective at distinguishing topics than those of unmerged models.
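A hedged sketch of merging frequent collocations into single tokens before topic modeling; the scoring here is a simple PMI-style ratio, not necessarily the paper's metric, and all names are illustrative.

```python
from collections import Counter

def merge_collocations(docs, min_count=2, threshold=2.0):
    """Merge frequent, cohesive bigrams into single tokens."""
    uni, bi = Counter(), Counter()
    for doc in docs:
        uni.update(doc)
        bi.update(zip(doc, doc[1:]))
    total = sum(uni.values())
    keep = {p for p, n in bi.items()
            if n >= min_count and n * total / (uni[p[0]] * uni[p[1]]) > threshold}
    merged = []
    for doc in docs:
        out, i = [], 0
        while i < len(doc):
            if i + 1 < len(doc) and (doc[i], doc[i + 1]) in keep:
                out.append(doc[i] + "_" + doc[i + 1]); i += 2
            else:
                out.append(doc[i]); i += 1
        merged.append(out)
    return merged

docs = [["new", "york", "times"], ["new", "york", "city"], ["a", "new", "idea"]]
print(merge_collocations(docs))  # [['new_york', 'times'], ['new_york', 'city'],
                                 #  ['a', 'new', 'idea']]
```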
arXiv Detail & Related papers (2021-08-24T14:08:19Z)
- UCPhrase: Unsupervised Context-aware Quality Phrase Tagging [63.86606855524567]
UCPhrase is a novel unsupervised context-aware quality phrase tagger.
We induce high-quality phrase spans as silver labels from consistently co-occurring word sequences.
We show that our design is superior to state-of-the-art pre-trained, unsupervised, and distantly supervised methods.
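An illustrative take on "silver labels from consistently co-occurring word sequences": treat n-grams repeated across a document's sentences as quality-phrase candidates. This is a simplification, not UCPhrase's exact procedure.

```python
from collections import Counter

def silver_phrases(doc_sentences, max_n=3, min_count=3):
    """Collect n-grams that recur across a document as silver labels."""
    counts = Counter()
    for sent in doc_sentences:
        toks = sent.lower().split()
        for n in range(2, max_n + 1):
            counts.update(tuple(toks[i:i + n])
                          for i in range(len(toks) - n + 1))
    return [" ".join(g) for g, c in counts.items() if c >= min_count]

doc = ["deep neural networks learn representations",
       "we train deep neural networks end to end",
       "deep neural networks generalize well"]
print(silver_phrases(doc))  # ['deep neural', 'neural networks',
                            #  'deep neural networks']
```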
arXiv Detail & Related papers (2021-05-28T19:44:24Z)
- R$^2$-Net: Relation of Relation Learning Network for Sentence Semantic Matching [58.72111690643359]
We propose a Relation of Relation Learning Network (R2-Net) for sentence semantic matching.
We first employ BERT to encode the input sentences from a global perspective.
Then a CNN-based encoder is designed to capture keywords and phrase information from a local perspective.
To fully leverage labels for better relation information extraction, we introduce a self-supervised relation of relation classification task.
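A minimal sketch of the global + local encoding idea only (the self-supervised relation-of-relation task is omitted): a stand-in global sentence vector concatenated with CNN features over token embeddings. R2-Net uses BERT for the global view; a random embedding table stands in here to keep the example self-contained.

```python
import torch
import torch.nn as nn

class GlobalLocalEncoder(nn.Module):
    """Global sentence vector + CNN-pooled local keyword features."""
    def __init__(self, vocab=1000, dim=64, n_filters=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.conv = nn.Conv1d(dim, n_filters, kernel_size=3, padding=1)

    def forward(self, ids):
        tok = self.embed(ids)                   # (B, T, dim)
        global_vec = tok.mean(dim=1)            # stand-in for BERT's [CLS]
        local = self.conv(tok.transpose(1, 2))  # CNN over the token sequence
        local_vec = local.max(dim=2).values     # max-pool keyword features
        return torch.cat([global_vec, local_vec], dim=-1)

m = GlobalLocalEncoder()
print(m(torch.randint(0, 1000, (2, 12))).shape)  # torch.Size([2, 96])
```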
arXiv Detail & Related papers (2020-12-16T13:11:30Z)
- Improving Chinese Segmentation-free Word Embedding With Unsupervised Association Measure [3.9435648520559177]
A segmentation-free word embedding model is proposed that collects an n-gram vocabulary via a novel unsupervised association measure called pointwise association with times information (PATI).
The proposed method leverages more latent information from the corpus and can thus collect more valid n-grams with stronger cohesion as embedding targets in unsegmented language data, such as Chinese text.
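The exact PATI formula is not given in this summary. As context, here is plain pointwise mutual information (PMI) over character bigrams, the standard association measure that such methods extend with frequency ("times") information, applied to a tiny unsegmented Chinese string:

```python
import math
from collections import Counter

def pmi_bigrams(text, min_count=2):
    """Score character bigrams by PMI: log p(ab) / (p(a) p(b))."""
    uni, bi = Counter(text), Counter(zip(text, text[1:]))
    n_uni, n_bi = sum(uni.values()), sum(bi.values())
    return {a + b: math.log2((c / n_bi) /
                             ((uni[a] / n_uni) * (uni[b] / n_uni)))
            for (a, b), c in bi.items() if c >= min_count}

scores = pmi_bigrams("北京大学在北京，北京大学很大")
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])  # most cohesive bigrams
```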
arXiv Detail & Related papers (2020-07-05T13:55:19Z)
- BURT: BERT-inspired Universal Representation from Twin Structure [89.82415322763475]
BURT (BERT inspired Universal Representation from Twin Structure) is capable of generating universal, fixed-size representations for input sequences of any granularity.
Our proposed BURT adopts a Siamese network, learning sentence-level representations from a natural language inference dataset and word/phrase-level representations from a paraphrasing dataset.
We evaluate BURT across different granularities of text similarity tasks, including STS tasks, SemEval2013 Task 5(a) and some commonly used word similarity tasks.
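A minimal sketch of the "any granularity, fixed size" idea: pool token vectors so that a word, a phrase, and a sentence all map into one space. BURT learns this mapping with its Siamese training objectives; the plain mean-pooling below is only a stand-in.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
embed = nn.Embedding(1000, 64)  # toy token embeddings

def fixed_size(ids):
    # Mean-pool so a word, a phrase, or a sentence all land in the same
    # 64-d space (illustration only; BURT trains this with a Siamese
    # objective rather than plain pooling).
    return embed(torch.tensor(ids)).mean(dim=0)

print(fixed_size([5]).shape, fixed_size([5, 17, 42, 7]).shape)  # both 64-d
```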
arXiv Detail & Related papers (2020-04-29T04:01:52Z)