Including Signed Languages in Natural Language Processing
- URL: http://arxiv.org/abs/2105.05222v1
- Date: Tue, 11 May 2021 17:37:55 GMT
- Title: Including Signed Languages in Natural Language Processing
- Authors: Kayo Yin, Amit Moryossef, Julie Hochgesang, Yoav Goldberg, Malihe Alikhani
- Abstract summary: Signed languages are the primary means of communication for many deaf and hard of hearing individuals.
This position paper calls on the NLP community to include signed languages as a research area with high social and scientific impact.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Signed languages are the primary means of communication for many deaf and
hard of hearing individuals. Since signed languages exhibit all the fundamental
linguistic properties of natural language, we believe that tools and theories
of Natural Language Processing (NLP) are crucial to their modeling. However,
existing research in Sign Language Processing (SLP) seldom attempts to explore
and leverage the linguistic organization of signed languages. This position
paper calls on the NLP community to include signed languages as a research area
with high social and scientific impact. We first discuss the linguistic
properties of signed languages to consider during their modeling. Then, we
review the limitations of current SLP models and identify the open challenges
to extend NLP to signed languages. Finally, we urge (1) the adoption of an
efficient tokenization method; (2) the development of linguistically-informed
models; (3) the collection of real-world signed language data; (4) the
inclusion of local signed language communities as an active and leading voice
in the direction of research.
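To make recommendation (1) concrete, here is a minimal, hypothetical sketch of one common baseline for segmenting signed-language transcriptions: splitting a gloss sequence (a written approximation of signs, e.g. "IX-1p LIKE BOOK") into gloss tokens. The gloss strings and the "+" marker convention are illustrative assumptions, not the paper's proposed method.

```python
def tokenize_glosses(gloss_string: str) -> list[str]:
    """Split a gloss transcription into gloss tokens.

    Compound annotations joined with '+' (an assumed convention here,
    e.g. 'BOOK+TOPIC') are separated into the gloss and its marker.
    """
    tokens = []
    for unit in gloss_string.split():
        # Whitespace separates glosses; '+' separates a gloss
        # from an attached annotation.
        tokens.extend(unit.split("+"))
    return tokens

# Hypothetical gloss string for illustration:
print(tokenize_glosses("IX-1p LIKE BOOK+TOPIC"))
# → ['IX-1p', 'LIKE', 'BOOK', 'TOPIC']
```

Gloss-level tokenization is only a baseline; as the paper argues, signed languages also carry simultaneous channels (handshape, movement, non-manual features) that a flat token stream does not capture.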
Related papers
- Natural Language Processing RELIES on Linguistics (2024-05-09)
  We propose the acronym RELIES to encapsulate six major facets in which linguistics contributes to NLP.
  This list is not exhaustive, nor is linguistics the main point of reference for every effort under these themes.
- Natural Language Processing for Dialects of a Language: A Survey (2024-01-11)
  State-of-the-art natural language processing (NLP) models are trained on massive corpora and report superlative performance on evaluation datasets.
  This survey examines an important attribute of these datasets: the dialect of a language.
  Motivated by the performance degradation of NLP models on dialectal datasets and its implications for the equity of language technologies, we survey past NLP research on dialects in terms of datasets and approaches.
- Ling-CL: Understanding NLP Models through Linguistic Curricula (2023-10-31)
  We employ a characterization of linguistic complexity drawn from psycholinguistic and language-acquisition research.
  We develop data-driven curricula to understand the underlying linguistic knowledge that models learn in order to address NLP tasks.
- Quantifying the Dialect Gap and its Correlates Across Languages (2023-10-23)
  This work lays the foundation for furthering the field of dialectal NLP by documenting evident disparities and identifying possible pathways for addressing them through mindful data collection.
- Soft Language Clustering for Multilingual Model Pre-training (2023-06-13)
  We propose XLM-P, which contextually retrieves prompts as flexible guidance for encoding instances conditionally.
  XLM-P enables (1) lightweight modeling of language-invariant and language-specific knowledge across languages, and (2) easy integration with other multilingual pre-training methods.
- LERT: A Linguistically-motivated Pre-trained Language Model (2022-11-10)
  We propose LERT, a pre-trained language model trained on three types of linguistic features alongside the original pre-training task.
  Extensive experiments on ten Chinese NLU tasks show that LERT brings significant improvements.
- All You Need In Sign Language Production (2022-01-05)
  Sign language recognition and production must cope with several critical challenges.
  We present an introduction to Deaf culture, Deaf centers, and psychological perspectives on sign language.
  The backbone architectures and methods in SLP are briefly introduced, and a proposed taxonomy of SLP is presented.
- OkwuGbé: End-to-End Speech Recognition for Fon and Igbo (2021-03-13)
  We present a state-of-the-art ASR model for Fon, as well as benchmark ASR results for Igbo.
  We conduct a comprehensive linguistic analysis of each language and describe the creation of end-to-end, deep neural-network-based speech recognition models for both languages.
This list is automatically generated from the titles and abstracts of the papers in this site.