How to Design Translation Prompts for ChatGPT: An Empirical Study
- URL: http://arxiv.org/abs/2304.02182v2
- Date: Fri, 21 Apr 2023 09:35:44 GMT
- Title: How to Design Translation Prompts for ChatGPT: An Empirical Study
- Authors: Yuan Gao, Ruili Wang, Feng Hou
- Abstract summary: ChatGPT has demonstrated surprising abilities in natural language understanding and natural language generation.
We apply several translation prompts to a wide range of translation tasks.
Our work provides empirical evidence that ChatGPT still has great potential for translation.
- Score: 18.678893287863033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently released ChatGPT has demonstrated surprising abilities in
natural language understanding and natural language generation. Machine
translation relies heavily on the abilities of language understanding and
generation. Thus, in this paper, we explore how to assist machine translation
with ChatGPT. We apply several translation prompts to a wide range of
translation tasks. Our experimental results show that, with well-designed
translation prompts, ChatGPT achieves performance comparable to or better than
commercial translation systems on high-resource language translations. We
further evaluate translation quality using multiple references, where ChatGPT
achieves superior performance compared to commercial systems. We also conduct
experiments on domain-specific translation; the results show that ChatGPT can
comprehend a provided domain keyword and adjust its output to produce proper
translations. Finally, we apply few-shot prompts, which yield consistent
improvements across different base prompts. Our work provides empirical
evidence that ChatGPT still has great potential for translation.
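The three prompt styles the abstract describes (a base prompt, a domain-hinted prompt, and a few-shot prompt) can be sketched as plain string templates. This is an illustrative sketch only: the function names and the exact prompt wording are assumptions, not the paper's templates.

```python
# Illustrative sketch of three translation-prompt styles: base (zero-shot),
# domain-hinted, and few-shot. Wording and names are hypothetical, not the
# paper's exact templates.

def base_prompt(src_lang: str, tgt_lang: str, text: str) -> str:
    """Zero-shot translation prompt."""
    return f"Translate the following {src_lang} text into {tgt_lang}:\n{text}"

def domain_prompt(src_lang: str, tgt_lang: str, text: str, domain: str) -> str:
    """Adds a domain keyword so the model can adjust terminology."""
    return (f"Translate the following {src_lang} text in the {domain} domain "
            f"into {tgt_lang}:\n{text}")

def few_shot_prompt(src_lang: str, tgt_lang: str, text: str,
                    examples: list[tuple[str, str]]) -> str:
    """Prepends (source, reference) demonstration pairs to the base prompt."""
    shots = "\n".join(f"{src_lang}: {s}\n{tgt_lang}: {t}" for s, t in examples)
    return f"{shots}\n{base_prompt(src_lang, tgt_lang, text)}"

prompt = few_shot_prompt(
    "German", "English", "Guten Morgen!",
    examples=[("Danke schön.", "Thank you very much.")],
)
print(prompt)
```

Any of the three strings would then be sent as the user message to the model; the paper's finding is that the few-shot variant improves results consistently across base prompts.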
Related papers
- Prompting ChatGPT for Translation: A Comparative Analysis of Translation Brief and Persona Prompts [0.0]
This paper discusses the effectiveness of incorporating the conceptual tool of translation brief and the personas of translator and author into prompt design for translation tasks in ChatGPT.
Findings suggest that, although certain elements are constructive in facilitating human-to-human communication for translation tasks, their effectiveness is limited for improving translation quality in ChatGPT.
This accentuates the need for exploratory research on how translation theorists and practitioners can adapt the current set of conceptual tools, rooted in the human-to-human communication paradigm, for translation purposes in this emerging workflow involving human-machine interaction.
arXiv Detail & Related papers (2024-02-29T21:05:38Z)
- Optimizing Machine Translation through Prompt Engineering: An Investigation into ChatGPT's Customizability [0.0]
The study reveals that the inclusion of suitable prompts in large-scale language models like ChatGPT can yield flexible translations.
The research scrutinizes the changes in translation quality when prompts are used to generate translations that meet specific conditions.
arXiv Detail & Related papers (2023-08-02T19:11:04Z)
- Do GPTs Produce Less Literal Translations? [20.095646048167612]
Large Language Models (LLMs) have emerged as general-purpose language models capable of addressing many natural language generation or understanding tasks.
We find that translations out of English (E-X) from GPTs tend to be less literal, while exhibiting similar or better scores on Machine Translation quality metrics.
arXiv Detail & Related papers (2023-05-26T10:38:31Z)
- Decomposed Prompting for Machine Translation Between Related Languages using Large Language Models [55.35106713257871]
We introduce DecoMT, a novel approach of few-shot prompting that decomposes the translation process into a sequence of word chunk translations.
We show that DecoMT outperforms the strong few-shot prompting BLOOM model with an average improvement of 8 chrF++ scores across the examined languages.
arXiv Detail & Related papers (2023-05-22T14:52:47Z)
- ChatGPT Beyond English: Towards a Comprehensive Evaluation of Large Language Models in Multilingual Learning [70.57126720079971]
Large language models (LLMs) are among the most important breakthroughs in natural language processing (NLP).
This paper evaluates ChatGPT on 7 different tasks, covering 37 diverse languages with high, medium, low, and extremely low resources.
Our extensive experimental results demonstrate that ChatGPT performs worse than previous models across different NLP tasks and languages.
arXiv Detail & Related papers (2023-04-12T05:08:52Z)
- ParroT: Translating during Chat using Large Language Models tuned with Human Translation and Feedback [90.20262941911027]
ParroT is a framework to enhance and regulate the translation abilities during chat.
Specifically, ParroT reformulates translation data into the instruction-following style.
We propose three instruction types for finetuning ParroT models, including translation instruction, contrastive instruction, and error-guided instruction.
arXiv Detail & Related papers (2023-04-05T13:12:00Z)
- Towards Making the Most of ChatGPT for Machine Translation [75.576405098545]
ChatGPT shows remarkable capabilities for machine translation (MT).
Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages.
arXiv Detail & Related papers (2023-03-24T03:35:21Z)
- Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine [97.8609714773255]
We evaluate ChatGPT for machine translation, including translation prompt, multilingual translation, and translation robustness.
ChatGPT performs competitively with commercial translation products but lags behind significantly on low-resource or distant languages.
With the launch of the GPT-4 engine, the translation performance of ChatGPT is significantly boosted.
arXiv Detail & Related papers (2023-01-20T08:51:36Z)
- Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation [59.38247587308604]
We introduce a novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation.
We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T dataset.
Our translation networks outperform both sign video to spoken language and gloss to spoken language translation models.
arXiv Detail & Related papers (2020-03-30T21:35:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.