Translation in the Hands of Many: Centering Lay Users in Machine Translation Interactions
- URL: http://arxiv.org/abs/2502.13780v1
- Date: Wed, 19 Feb 2025 14:45:17 GMT
- Title: Translation in the Hands of Many: Centering Lay Users in Machine Translation Interactions
- Authors: Beatrice Savoldi, Alan Ramponi, Matteo Negri, Luisa Bentivogli
- Abstract summary: Machine Translation (MT) has become a global tool, with cross-lingual services now also supported by dialogue systems powered by multilingual Large Language Models (LLMs). This paper traces the shift in MT user profiles, focusing on non-expert users. We identify three key factors -- usability, trust, and literacy -- that shape these interactions and must be addressed to align MT with user needs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Converging societal and technical factors have transformed language technologies into user-facing applications employed across languages. Machine Translation (MT) has become a global tool, with cross-lingual services now also supported by dialogue systems powered by multilingual Large Language Models (LLMs). This accessibility has expanded MT's reach to a vast base of lay users, often with little to no expertise in the languages or the technology itself. Despite this, the understanding of MT consumed by this diverse group of users -- their needs, experiences, and interactions with these systems -- remains limited. This paper traces the shift in MT user profiles, focusing on non-expert users and how their engagement with these systems may change with LLMs. We identify three key factors -- usability, trust, and literacy -- that shape these interactions and must be addressed to align MT with user needs. By exploring these dimensions, we offer insights to guide future MT with a user-centered approach.
Related papers
- An Interdisciplinary Approach to Human-Centered Machine Translation [67.70453480427132]
Machine Translation (MT) tools are widely used today, often in contexts where professional translators are not present. Despite progress in MT technology, a gap persists between system development and real-world usage. This paper advocates for a human-centered approach to MT, emphasizing the alignment of system design with diverse communicative goals.
arXiv Detail & Related papers (2025-06-16T13:27:44Z)
- TULUN: Transparent and Adaptable Low-resource Machine Translation [30.705550819100424]
Tulun is a versatile solution for terminology-aware translation. Our open-source web-based platform enables users to easily create, edit, and leverage terminology resources.
arXiv Detail & Related papers (2025-05-24T12:58:58Z)
- Evaluating Multimodal Language Models as Visual Assistants for Visually Impaired Users [42.132487737233845]
This paper explores the effectiveness of Multimodal Large Language models (MLLMs) as assistive technologies for visually impaired individuals.
We conduct a user survey to identify adoption patterns and key challenges users face with such technologies.
arXiv Detail & Related papers (2025-03-28T16:54:25Z)
- MT-LENS: An all-in-one Toolkit for Better Machine Translation Evaluation [1.7775825387442485]
MT-LENS is a framework designed to evaluate Machine Translation (MT) systems across a variety of tasks. It offers a user-friendly platform to compare systems and analyze translations with interactive visualizations.
arXiv Detail & Related papers (2024-12-16T09:57:28Z)
- Training Zero-Shot Generalizable End-to-End Task-Oriented Dialog System Without Turn-level Dialog Annotations [2.757798192967912]
This work employs multi-task instruction fine-tuning to create more efficient and scalable task-oriented dialogue systems.
Our approach outperforms both state-of-the-art models trained on annotated data and billion-parameter off-the-shelf ChatGPT models.
arXiv Detail & Related papers (2024-07-21T04:52:38Z)
- Only Send What You Need: Learning to Communicate Efficiently in Federated Multilingual Machine Translation [17.159005029204092]
This paper focuses on a practical federated multilingual learning setup where clients with their own language-specific data aim to collaboratively construct a high-quality neural machine translation (NMT) model.
We propose a meta-learning-based adaptive parameter selection methodology, MetaSend, that improves the communication efficiency of model transmissions from clients during FL-based multilingual NMT training.
arXiv Detail & Related papers (2024-01-15T04:04:26Z)
- DIALIGHT: Lightweight Multilingual Development and Evaluation of Task-Oriented Dialogue Systems with Large Language Models [76.79929883963275]
DIALIGHT is a toolkit for developing and evaluating multilingual Task-Oriented Dialogue (ToD) systems.
It features a secure, user-friendly web interface for fine-grained human evaluation at both local utterance level and global dialogue level.
Our evaluations reveal that while PLM fine-tuning leads to higher accuracy and coherence, LLM-based systems excel in producing diverse and likeable responses.
arXiv Detail & Related papers (2024-01-04T11:27:48Z)
- IMTLab: An Open-Source Platform for Building, Evaluating, and Diagnosing Interactive Machine Translation Systems [94.39110258587887]
We present IMTLab, an open-source end-to-end interactive machine translation (IMT) system platform.
IMTLab treats the whole interactive translation process as a task-oriented dialogue with a human-in-the-loop setting.
arXiv Detail & Related papers (2023-10-17T11:29:04Z)
- Neural Machine Translation for the Indigenous Languages of the Americas: An Introduction [102.13536517783837]
Most languages from the Americas are low-resource, having a limited amount of parallel and monolingual data, if any.
We discuss recent advances, findings, and open questions, the product of an increased interest of the NLP community in these languages.
arXiv Detail & Related papers (2023-06-11T23:27:47Z)
- A Paradigm Shift: The Future of Machine Translation Lies with Large Language Models [55.42263732351375]
Machine Translation has greatly advanced over the years due to the developments in deep neural networks.
The emergence of Large Language Models (LLMs) like GPT-4 and ChatGPT is introducing a new phase in the MT domain.
We highlight several new MT directions, emphasizing the benefits of LLMs in scenarios such as Long-Document Translation, Stylized Translation, and Interactive Translation.
arXiv Detail & Related papers (2023-05-02T03:27:27Z)
- LVP-M3: Language-aware Visual Prompt for Multilingual Multimodal Machine Translation [94.33019040320507]
Multimodal Machine Translation (MMT) focuses on enhancing text-only translation with visual features.
Recent approaches still train a separate model for each language pair, which is costly and unaffordable as the number of languages increases.
We propose the Multilingual MMT task by establishing two new Multilingual MMT benchmark datasets covering seven languages.
arXiv Detail & Related papers (2022-10-19T12:21:39Z)
- Beyond General Purpose Machine Translation: The Need for Context-specific Empirical Research to Design for Appropriate User Trust [8.539683760001573]
We discuss research directions to support users to calibrate trust in Machine Translation systems.
Based on our findings, we advocate for empirical research on how MT systems are used in practice.
arXiv Detail & Related papers (2022-05-13T23:04:22Z)
- BERTuit: Understanding Spanish language in Twitter through a native transformer [70.77033762320572]
We present BERTuit, the largest transformer proposed so far for the Spanish language, pre-trained on a massive dataset of 230M Spanish tweets.
Our motivation is to provide a powerful resource to better understand Spanish Twitter and to be used on applications focused on this social network.
arXiv Detail & Related papers (2022-04-07T14:28:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.