Optimizing Machine Translation through Prompt Engineering: An
Investigation into ChatGPT's Customizability
- URL: http://arxiv.org/abs/2308.01391v2
- Date: Wed, 21 Feb 2024 07:24:06 GMT
- Authors: Masaru Yamada
- Abstract summary: The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations.
The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the influence of integrating the purpose of the
translation and the target audience into prompts on the quality of translations
produced by ChatGPT. Drawing on previous translation studies, industry
practices, and ISO standards, the research underscores the significance of the
pre-production phase in the translation process. The study reveals that the
inclusion of suitable prompts in large-scale language models like ChatGPT can
yield flexible translations, a feat yet to be realized by conventional Machine
Translation (MT). The research scrutinizes the changes in translation quality
when prompts are used to generate translations that meet specific conditions.
The evaluation is conducted from a practicing translator's viewpoint, both
subjectively and qualitatively, supplemented by the use of OpenAI's word
embedding API for cosine similarity calculations. The findings suggest that the
integration of the purpose and target audience into prompts can indeed modify
the generated translations, generally enhancing the translation quality by
industry standards. The study also demonstrates the practical application of
the "good translation" concept, particularly in the context of marketing
documents and culturally dependent idioms.
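The abstract mentions using OpenAI's word embedding API for cosine similarity calculations between translations. A minimal sketch of that comparison step is below; the embedding vectors are hypothetical placeholders standing in for API output (real embedding models return vectors with hundreds or thousands of dimensions), and the variable names are illustrative assumptions, not the author's code.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors:
    dot(a, b) / (||a|| * ||b||), in [-1, 1] for real vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical short vectors standing in for embeddings of a
# baseline translation and a prompt-conditioned translation.
baseline = [0.10, 0.30, 0.50]
prompted = [0.10, 0.25, 0.55]
similarity = cosine_similarity(baseline, prompted)  # closer to 1.0 = more similar
```

A value near 1.0 indicates the two translations occupy nearly the same direction in embedding space, which is how semantic closeness is typically operationalised with embeddings.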
Related papers
- Questionnaires for Everyone: Streamlining Cross-Cultural Questionnaire Adaptation with GPT-Based Translation Quality Evaluation [6.8731197511363415]
This work presents a prototype tool that can expedite the questionnaire translation process.
The tool incorporates forward-backward translation using DeepL alongside GPT-4-generated translation quality evaluations and improvement suggestions.
arXiv Detail & Related papers (2024-07-30T07:34:40Z)
- Prompting ChatGPT for Translation: A Comparative Analysis of Translation Brief and Persona Prompts [0.0]
This paper discusses the effectiveness of incorporating the conceptual tool of translation brief and the personas of translator and author into prompt design for translation tasks in ChatGPT.
Findings suggest that, although certain elements are constructive in facilitating human-to-human communication for translation tasks, their effectiveness is limited for improving translation quality in ChatGPT.
This underscores the need for exploratory research into how translation theorists and practitioners can adapt the current set of conceptual tools, rooted in the human-to-human communication paradigm, to this emerging workflow involving human-machine interaction.
arXiv Detail & Related papers (2024-02-29T21:05:38Z)
- Gradable ChatGPT Translation Evaluation [7.697698018200632]
ChatGPT, as a language model based on large-scale pre-training, has a profound influence on the domain of machine translation.
The design of the translation prompt emerges as a key aspect that can wield influence over factors such as the style, precision and accuracy of the translation to a certain extent.
This paper proposes a generic taxonomy, which defines gradable translation prompts in terms of expression type, translation style, POS information and explicit statement.
arXiv Detail & Related papers (2024-01-18T13:58:10Z)
- Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation [64.5862977630713]
This study investigates how Large Language Models (LLMs) leverage source and reference data in the machine translation evaluation task.
We find that reference information significantly enhances evaluation accuracy, while, surprisingly, source information is sometimes counterproductive.
arXiv Detail & Related papers (2024-01-12T13:23:21Z)
- Contextual Label Projection for Cross-Lingual Structured Prediction [103.55999471155104]
CLaP translates text to the target language and performs contextual translation on the labels using the translated text as the context.
We benchmark CLaP with other label projection techniques on zero-shot cross-lingual transfer across 39 languages.
arXiv Detail & Related papers (2023-09-16T10:27:28Z)
- Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models [55.35106713257871]
We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations.
We show that DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.
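The decomposition idea behind DecoMT can be illustrated with a small sketch that splits a sentence into consecutive word chunks and builds one translation prompt per chunk. This is not the authors' implementation; the chunk size and prompt template are illustrative assumptions, and a real system would send each prompt to an LLM and stitch the outputs back together.

```python
def chunk(words, size):
    """Split a token list into consecutive word chunks of at most `size` words."""
    return [words[i:i + size] for i in range(0, len(words), size)]

def build_chunk_prompts(sentence, size=3):
    """Build one prompt per word chunk; in a decomposed-prompting setup,
    each prompt would be answered separately and the chunk translations
    assembled into the final output."""
    return [
        f"Translate the chunk: '{' '.join(c)}'"
        for c in chunk(sentence.split(), size)
    ]

prompts = build_chunk_prompts("the quick brown fox jumps over the lazy dog")
# nine words with a chunk size of three -> three prompts
```

Translating short, contiguous chunks with surrounding context is what lets the approach sidestep long-sentence failures in few-shot prompting of smaller models.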
arXiv Detail & Related papers (2023-05-22T14:52:47Z)
- How to Design Translation Prompts for ChatGPT: An Empirical Study [18.678893287863033]
ChatGPT has demonstrated surprising abilities in natural language understanding and natural language generation.
We evaluate several translation prompts across a wide range of translation tasks.
Our work provides empirical evidence that ChatGPT still has great potential in translation.
arXiv Detail & Related papers (2023-04-05T01:17:59Z)
- A Bayesian approach to translators' reliability assessment [0.0]
We treat the Translation Quality Assessment (TQA) process as a complex process, analysing it from the point of view of the physics of complex systems.
We build two Bayesian models that parameterise the features involved in the TQA process, namely the translation difficulty and the characteristics of the translators involved in producing the translation and assessing its quality.
We show that reviewers' reliability cannot be taken for granted even when they are expert translators.
arXiv Detail & Related papers (2022-03-14T14:29:45Z)
- Measuring Uncertainty in Translation Quality Evaluation (TQE) [62.997667081978825]
This work carries out research to correctly estimate confidence intervals depending on the sample size of the translated text.
The methodology applied draws on Bernoulli Statistical Distribution Modelling (BSDM) and Monte Carlo Sampling Analysis (MCSA).
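The combination of a Bernoulli model with Monte Carlo sampling can be sketched as follows: treat each evaluated sentence as an independent Bernoulli trial (acceptable or not) and resample to obtain an empirical confidence interval for the observed acceptance rate. The function name, trial counts, and example figures are illustrative assumptions, not the paper's actual procedure.

```python
import random

def bernoulli_ci(p_hat, n, trials=10000, alpha=0.05, seed=0):
    """Monte Carlo confidence interval for a Bernoulli proportion:
    simulate `trials` resamples of n Bernoulli(p_hat) draws and take
    the empirical alpha/2 and 1 - alpha/2 quantiles of the sample means."""
    rng = random.Random(seed)
    means = sorted(
        sum(rng.random() < p_hat for _ in range(n)) / n
        for _ in range(trials)
    )
    lo = means[int(trials * (alpha / 2))]
    hi = means[int(trials * (1 - alpha / 2)) - 1]
    return lo, hi

# Hypothetical example: 85% of 200 sampled sentences judged acceptable.
low, high = bernoulli_ci(0.85, 200)
```

The interval widens as the sample size n shrinks, which is the core point of tying the confidence interval to the amount of translated text evaluated.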
arXiv Detail & Related papers (2021-11-15T12:09:08Z)
- Contextual Neural Machine Translation Improves Translation of Cataphoric Pronouns [50.245845110446496]
We investigate the effect of future sentences as context by comparing the performance of a contextual NMT model trained with the future context to the one trained with the past context.
Our experiments and evaluation, using generic and pronoun-focused automatic metrics, show that the use of future context achieves significant improvements over the context-agnostic Transformer.
arXiv Detail & Related papers (2020-04-21T10:45:48Z)
- Multilingual Alignment of Contextual Word Representations [49.42244463346612]
An aligned version of BERT exhibits significantly improved zero-shot performance on XNLI compared to the base model.
We introduce a contextual version of word retrieval and show that it correlates well with downstream zero-shot transfer.
These results support contextual alignment as a useful concept for understanding large multilingual pre-trained models.
arXiv Detail & Related papers (2020-02-10T03:27:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.