User Study for Improving Tools for Bible Translation
- URL: http://arxiv.org/abs/2302.00778v1
- Date: Wed, 1 Feb 2023 22:22:03 GMT
- Title: User Study for Improving Tools for Bible Translation
- Authors: Joel Mathew, Ulf Hermjakob
- Abstract summary: Technology has increasingly become an integral part of the Bible translation process.
Recent advances in AI could potentially play a pivotal role in reducing translation time and improving overall quality.
- Score: 2.7514191327409714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Technology has increasingly become an integral part of the Bible translation
process. Over time, both the translation process and relevant technology have
evolved greatly. More recently, the field of Natural Language Processing (NLP)
has made great progress in solving some problems previously thought
impenetrable. Through this study we endeavor to better understand and
communicate about a segment of the current landscape of the Bible translation
process as it relates to technology and identify pertinent issues. We conduct
several interviews with individuals working in different levels of the Bible
translation process from multiple organizations to identify gaps and
bottlenecks where technology (including recent advances in AI) could
potentially play a pivotal role in reducing translation time and improving
overall quality.
Related papers
- (Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts [52.18246881218829]
We introduce a novel multi-agent framework based on large language models (LLMs) for literary translation, implemented as a company called TransAgents.
To evaluate the effectiveness of our system, we propose two innovative evaluation strategies: Monolingual Human Preference (MHP) and Bilingual LLM Preference (BLP).
arXiv Detail & Related papers (2024-05-20T05:55:08Z)
- Rethinking and Improving Multi-task Learning for End-to-end Speech Translation [51.713683037303035]
We investigate the consistency between different tasks, considering different times and modules.
We find that the textual encoder primarily facilitates cross-modal conversion, but the presence of noise in speech impedes the consistency between text and speech representations.
We propose an improved multi-task learning (IMTL) approach for the ST task, which bridges the modal gap by mitigating the difference in length and representation.
arXiv Detail & Related papers (2023-11-07T08:48:46Z)
- NSOAMT -- New Search Only Approach to Machine Translation [0.0]
A "new search only approach to machine translation" was adopted to tackle some of the slowness and inaccuracy of the other technologies.
The idea is to develop a solution that, by indexing an incremental set of words that combine a certain semantic meaning, makes it possible to create a process of correspondence between their native language record and the language of translation.
arXiv Detail & Related papers (2023-09-19T11:12:21Z)
- Challenges in Context-Aware Neural Machine Translation [39.89082986080746]
Context-aware neural machine translation involves leveraging information beyond sentence-level context to resolve discourse dependencies.
Despite well-reasoned intuitions, most context-aware translation models show only modest improvements over sentence-level systems.
We propose a more realistic setting for document-level translation, called paragraph-to-paragraph (para2para) translation.
arXiv Detail & Related papers (2023-05-23T07:08:18Z)
- The Best of Both Worlds: Combining Human and Machine Translations for Multilingual Semantic Parsing with Active Learning [50.320178219081484]
We propose an active learning approach that exploits the strengths of both human and machine translations.
An ideal utterance selection can significantly reduce the error and bias in the translated data.
arXiv Detail & Related papers (2023-05-22T05:57:47Z)
- Deep Transfer Learning & Beyond: Transformer Language Models in Information Systems Research [0.913755431537592]
Recent progress in natural language processing involving transformer language models (TLMs) offers a potential avenue for AI-driven business and societal transformation.
We review this recent progress as well as recent literature utilizing text mining in top IS journals to develop an outline for how future IS research can benefit from these new techniques.
arXiv Detail & Related papers (2021-10-18T02:01:39Z)
- Systematic Inequalities in Language Technology Performance across the World's Languages [94.65681336393425]
We introduce a framework for estimating the global utility of language technologies.
Our analyses involve the field at large, but also more in-depth studies on both user-facing technologies and more linguistic NLP tasks.
arXiv Detail & Related papers (2021-10-13T14:03:07Z)
- Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task [26.703809355057224]
We conduct a detailed analysis to understand the impact of the auxiliary task on the primary task within the multitask learning framework.
Our analysis confirms that multitask learning tends to generate similar decoder representations from different modalities.
Inspired by these findings, we propose three methods to improve translation quality.
arXiv Detail & Related papers (2021-07-12T23:53:40Z)
- Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation [59.38247587308604]
We introduce a novel transformer based architecture that jointly learns Continuous Sign Language Recognition and Translation.
We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T dataset.
Our translation networks outperform both sign video to spoken language and gloss to spoken language translation models.
arXiv Detail & Related papers (2020-03-30T21:35:09Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of introducing transfer learning techniques for NLP by a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.