Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models
- URL: http://arxiv.org/abs/2406.06910v2
- Date: Wed, 12 Jun 2024 15:05:40 GMT
- Title: Agent-SiMT: Agent-assisted Simultaneous Machine Translation with Large Language Models
- Authors: Shoutao Guo, Shaolei Zhang, Zhengrui Ma, Min Zhang, Yang Feng
- Abstract summary: Simultaneous Machine Translation (SiMT) generates target translations while reading the source sentence.
Existing SiMT methods generally adopt the traditional Transformer architecture, which concurrently determines the policy and generates translations.
We introduce Agent-SiMT, a framework combining the strengths of Large Language Models (LLMs) and traditional SiMT methods.
- Score: 38.49925017512848
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Simultaneous Machine Translation (SiMT) generates target translations while reading the source sentence. It relies on a policy to determine the optimal timing for reading source words and generating translations. Existing SiMT methods generally adopt the traditional Transformer architecture, which concurrently determines the policy and generates translations. While they excel at determining policies, their translation performance is suboptimal. Conversely, Large Language Models (LLMs), trained on extensive corpora, possess superior generation capabilities, but it is difficult for them to acquire the translation policy through the training methods of SiMT. Therefore, we introduce Agent-SiMT, a framework combining the strengths of LLMs and traditional SiMT methods. Agent-SiMT contains a policy-decision agent and a translation agent. The policy-decision agent is managed by a SiMT model, which determines the translation policy using the partial source sentence and the partial translation. The translation agent, leveraging an LLM, generates the translation based on the partial source sentence. The two agents collaborate to accomplish SiMT. Experiments demonstrate that Agent-SiMT attains state-of-the-art performance.
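As a rough illustration of the collaboration described in the abstract, the sketch below alternates between a policy-decision step and a translation step. The wait-k rule standing in for the SiMT policy model, the llm-style stub, and the stop rule are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the Agent-SiMT loop (not the authors' code).
# The policy agent decides READ/WRITE; the translation agent extends the
# translation. A wait-k rule stands in for the SiMT policy model, and a
# stub stands in for the LLM.

from typing import List

def policy_agent(src_prefix: List[str], tgt_prefix: List[str], k: int = 3) -> str:
    """Stand-in SiMT policy: wait-k. Returns "READ" or "WRITE"."""
    if len(src_prefix) - len(tgt_prefix) < k:
        return "READ"
    return "WRITE"

def translation_agent(src_prefix: List[str], tgt_prefix: List[str]) -> str:
    """Stub for the LLM: emit the next target token given the partial source."""
    return f"tok{len(tgt_prefix)}"  # placeholder generation

def simultaneous_translate(source: List[str]) -> List[str]:
    src_prefix: List[str] = []
    target: List[str] = []
    stream = iter(source)
    exhausted = False
    while True:
        action = "WRITE" if exhausted else policy_agent(src_prefix, target)
        if action == "READ":
            try:
                src_prefix.append(next(stream))
            except StopIteration:
                exhausted = True
        else:
            target.append(translation_agent(src_prefix, target))
            if exhausted and len(target) >= len(source):  # crude stop rule
                break
    return target

print(simultaneous_translate("wir müssen jetzt handeln .".split()))
```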
Related papers
- Towards Zero-Shot Multimodal Machine Translation [64.9141931372384]
We propose a method to bypass the need for fully supervised data to train multimodal machine translation systems.
Our method, called ZeroMMT, consists of adapting a strong text-only machine translation (MT) model by training it on a mixture of two objectives.
To prove that our method generalizes to languages with no fully supervised training data available, we extend the CoMMuTE evaluation dataset to three new languages: Arabic, Russian and Chinese.
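The summary leaves the two objectives unnamed; purely as an illustration of training on a weighted mixture of objectives, one might combine two per-batch losses as below (the loss names and the weight alpha are assumptions, not the paper's definitions).

```python
# Hypothetical mixture of two training objectives (names and weighting
# are assumptions; ZeroMMT defines its own pair of losses).

def mixed_loss(loss_a: float, loss_b: float, alpha: float = 0.5) -> float:
    """Convex combination of two per-batch objectives."""
    return alpha * loss_a + (1.0 - alpha) * loss_b

# toy per-batch values: e.g. an adaptation loss and a term keeping the
# adapted model close to the original text-only MT model
print(mixed_loss(0.9, 0.4, alpha=0.7))  # ≈ 0.75
```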
arXiv Detail & Related papers (2024-07-18T15:20:31Z)
- TasTe: Teaching Large Language Models to Translate through Self-Reflection [82.83958470745381]
Large language models (LLMs) have exhibited remarkable performance in various natural language processing tasks.
We propose the TasTe framework, which stands for translating through self-reflection.
The evaluation results in four language directions on the WMT22 benchmark reveal the effectiveness of our approach compared to existing methods.
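Reading "self-reflection" as a draft, assess, refine loop gives a sketch like the following; the prompt wording and the llm() stub are assumptions rather than the TasTe paper's actual templates.

```python
# Illustrative draft -> self-assess -> refine loop for translation
# "through self-reflection" (prompts and the llm() stub are assumptions).

def llm(prompt: str) -> str:
    """Stub standing in for an instruction-tuned LLM."""
    return f"<model output for: {prompt[:40]}...>"

def taste_style_translate(source: str, src_lang: str = "German",
                          tgt_lang: str = "English") -> str:
    draft = llm(f"Translate from {src_lang} to {tgt_lang}: {source}")
    critique = llm(
        f"Assess this {tgt_lang} translation of '{source}' for accuracy "
        f"and fluency, and list concrete issues: {draft}"
    )
    refined = llm(
        f"Rewrite the translation '{draft}' of '{source}', "
        f"fixing the issues noted here: {critique}"
    )
    return refined

print(taste_style_translate("Wir müssen jetzt handeln."))
```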
arXiv Detail & Related papers (2024-06-12T17:21:21Z)
- (Perhaps) Beyond Human Translation: Harnessing Multi-Agent Collaboration for Translating Ultra-Long Literary Texts [52.18246881218829]
We introduce a novel multi-agent framework based on large language models (LLMs) for literary translation, implemented as a company called TransAgents.
To evaluate the effectiveness of our system, we propose two innovative evaluation strategies: Monolingual Human Preference (MHP) and Bilingual LLM Preference (BLP).
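A Bilingual LLM Preference (BLP) judgment plausibly shows a judge model the source together with two candidate translations and asks for a preference; the prompt wording and the judge() stub below are assumptions.

```python
# Sketch of a BLP-style comparison: an LLM judge sees the source plus two
# candidate translations and picks one (stub prompt and judge are assumed).

import random

def judge(prompt: str) -> str:
    """Stub for an LLM judge; a real system would call a strong model."""
    return random.choice(["A", "B"])

def blp_preference(source: str, cand_a: str, cand_b: str) -> str:
    prompt = (
        "You are a bilingual literary translation judge.\n"
        f"Source: {source}\n"
        f"Translation A: {cand_a}\n"
        f"Translation B: {cand_b}\n"
        "Answer with the single letter of the better translation."
    )
    return judge(prompt)

print(blp_preference("且将新火试新茶。",
                     "Try new tea on a fresh fire.",
                     "Let us brew new tea over a newly lit fire."))
```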
arXiv Detail & Related papers (2024-05-20T05:55:08Z)
- SiLLM: Large Language Models for Simultaneous Machine Translation [41.303764786790616]
Simultaneous Machine Translation (SiMT) generates translations while reading the source sentence.
Existing SiMT methods employ a single model to concurrently determine the policy and generate the translations.
We propose SiLLM, which delegates the two sub-tasks to separate agents.
arXiv Detail & Related papers (2024-02-20T14:23:34Z)
- Simultaneous Machine Translation with Tailored Reference [35.46823126036308]
Simultaneous machine translation (SiMT) generates the translation while reading the source sentence.
Existing SiMT models are typically trained with the same reference, disregarding the varying amounts of source information available at different latency levels.
We propose a novel method that provides a tailored reference for SiMT models trained at different latency levels by rephrasing the ground-truth.
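In outline, the data pipeline pairs each latency setting with a reference rephrased to match the source information available at that latency; the rephrase() stub below is a hypothetical stand-in for the paper's rephrasing procedure.

```python
# Sketch of producing latency-tailored references: for each latency
# setting (here a wait-k lagging value) the ground-truth is rephrased so
# it stays consistent with the source prefix available at decoding time.
# rephrase() is a hypothetical stub, not the paper's rephrasing model.

def rephrase(reference: str, k: int) -> str:
    """Stub: return a reference adapted to lagging k (identity here)."""
    return reference  # a real system would reorder/reword as needed

def build_tailored_references(reference: str, latencies=(1, 3, 5, 7, 9)):
    return {k: rephrase(reference, k) for k in latencies}

refs = build_tailored_references("we must act now .")
for k, ref in refs.items():
    print(f"wait-{k}: {ref}")
```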
arXiv Detail & Related papers (2023-10-20T15:32:26Z)
- Learning Optimal Policy for Simultaneous Machine Translation via Binary Search [17.802607889752736]
Simultaneous machine translation (SiMT) starts to output translation while reading the source sentence.
The policy determines the number of source tokens read during the translation of each target token.
We present a new method for constructing the optimal policy online via binary search.
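One way to read this is as a search for the smallest source prefix that supports each target token: if a prefix-conditioned score is monotone in prefix length, binary search finds the threshold in O(log n) probes. The score() function and the threshold below are illustrative assumptions, not the paper's criterion.

```python
# Illustrative binary search for a per-token read amount: find the
# smallest source prefix length whose prefix-conditioned score for the
# reference token clears a threshold (assumes score is non-decreasing).

from typing import Callable

def min_reads(score: Callable[[int], float], n_src: int, thresh: float) -> int:
    """Binary search over prefix lengths 1..n_src."""
    lo, hi = 1, n_src
    while lo < hi:
        mid = (lo + hi) // 2
        if score(mid) >= thresh:
            hi = mid
        else:
            lo = mid + 1
    return lo

# toy monotone scorer: confidence grows with the prefix length
toy_score = lambda j: j / 10.0
print(min_reads(toy_score, n_src=10, thresh=0.65))  # -> 7
```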
arXiv Detail & Related papers (2023-05-22T07:03:06Z)
- Exploring Human-Like Translation Strategy with Large Language Models [93.49333173279508]
Large language models (LLMs) have demonstrated impressive capabilities in general scenarios.
This work proposes the MAPS framework, which stands for Multi-Aspect Prompting and Selection.
We employ a selection mechanism based on quality estimation to filter out noisy and unhelpful knowledge.
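A MAPS-style pipeline can be sketched as: elicit several aspects of translation knowledge, generate one candidate per aspect, and keep whichever candidate a quality-estimation scorer ranks highest. The aspect names and both stubs below are assumptions based on the summary above, not the paper's exact setup.

```python
# Sketch of multi-aspect prompting followed by QE-based selection
# (aspect names and both stubs are assumptions).

def llm(prompt: str) -> str:
    return f"<candidate for: {prompt[:30]}...>"  # stub LLM call

def qe_score(source: str, candidate: str) -> float:
    # stub; a real system would use a quality-estimation model
    return float(len(candidate) % 7)

def maps_translate(source: str) -> str:
    aspects = ["keywords", "topic", "similar examples"]  # assumed aspects
    candidates = [llm(f"Using {a} of '{source}', translate it.") for a in aspects]
    candidates.append(llm(f"Translate: {source}"))  # plain baseline candidate
    return max(candidates, key=lambda c: qe_score(source, c))

print(maps_translate("Wir müssen jetzt handeln."))
```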
arXiv Detail & Related papers (2023-05-06T19:03:12Z)
- Gaussian Multi-head Attention for Simultaneous Machine Translation [21.03142288187605]
Simultaneous machine translation (SiMT) outputs translation while receiving the streaming source inputs.
We propose a new SiMT policy by modeling alignment and translation in a unified manner.
Experiments on En-Vi and De-En tasks show that our method outperforms strong baselines on the trade-off between translation and latency.
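One plausible reading of modeling alignment and translation in a unified manner is an attention distribution biased by a Gaussian centred on a predicted alignment position, as sketched below; the exact parameterisation is an assumption.

```python
# Sketch of biasing attention logits with a log-Gaussian centred on a
# predicted alignment position (parameterisation is an assumption).

import numpy as np

def gaussian_biased_attention(scores: np.ndarray, center: float,
                              sigma: float = 1.0) -> np.ndarray:
    """scores: (src_len,) raw attention logits for one target step."""
    positions = np.arange(scores.shape[0])
    bias = -((positions - center) ** 2) / (2.0 * sigma ** 2)  # log-Gaussian
    logits = scores + bias
    weights = np.exp(logits - logits.max())
    return weights / weights.sum()  # softmax over biased logits

# with flat scores, the weights form a Gaussian bump around position 2
print(gaussian_biased_attention(np.zeros(6), center=2.0).round(3))
```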
arXiv Detail & Related papers (2022-03-17T04:01:25Z)