Toward Machine Interpreting: Lessons from Human Interpreting Studies
- URL: http://arxiv.org/abs/2508.07964v1
- Date: Mon, 11 Aug 2025 13:20:33 GMT
- Title: Toward Machine Interpreting: Lessons from Human Interpreting Studies
- Authors: Matthias Sperber, Maureen de Seyssel, Jiajun Bao, Matthias Paulik,
- Abstract summary: We argue that there is great potential to adopt many human interpreting principles using recent modeling techniques. We hope that our findings provide inspiration for closing the perceived usability gap, and can motivate progress toward true machine interpreting.
- Score: 8.356119129161796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current speech translation systems, while having achieved impressive accuracies, are rather static in their behavior and do not adapt to real-world situations in ways human interpreters do. In order to improve their practical usefulness and enable interpreting-like experiences, a precise understanding of the nature of human interpreting is crucial. To this end, we discuss human interpreting literature from the perspective of the machine translation field, while considering both operational and qualitative aspects. We identify implications for the development of speech translation systems and argue that there is great potential to adopt many human interpreting principles using recent modeling techniques. We hope that our findings provide inspiration for closing the perceived usability gap, and can motivate progress toward true machine interpreting.
Related papers
- Vision-Grounded Machine Interpreting: Improving the Translation Process through Visual Cues [0.0]
Vision-Grounded Interpreting (VGI) is a novel approach designed to address the limitations of unimodal machine interpreting. We present a prototype system that integrates a vision-language model to process both speech and visual input from a webcam. To evaluate the effectiveness of this approach, we constructed a hand-crafted diagnostic corpus targeting three types of ambiguity.
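As a rough illustration of the pipeline this abstract describes, the sketch below wires a webcam frame and a transcribed utterance into a single translation call. The `transcribe` and `vlm_translate` functions are hypothetical stand-ins, not the paper's actual components; only the webcam capture uses a real API (OpenCV).

```python
# A minimal sketch of a vision-grounded interpreting loop, under the assumption
# of a generic speech recognizer and vision-language model. `transcribe` and
# `vlm_translate` are hypothetical placeholders.
import cv2  # pip install opencv-python


def transcribe(audio_chunk: bytes) -> str:
    """Hypothetical ASR stand-in; a real system would stream a recognizer here."""
    return "the bank was crowded"  # ambiguous without visual context


def vlm_translate(source_text: str, frame) -> str:
    """Hypothetical vision-language model call that disambiguates using the frame."""
    return f"[translation of '{source_text}' grounded in the current frame]"


def interpreting_loop() -> None:
    camera = cv2.VideoCapture(0)  # webcam input, as in the prototype description
    try:
        ok, frame = camera.read()
        if not ok:
            raise RuntimeError("no webcam frame available")
        source = transcribe(b"...")           # placeholder audio chunk
        print(vlm_translate(source, frame))   # visual cue resolves the ambiguity
    finally:
        camera.release()


if __name__ == "__main__":
    interpreting_loop()
```

The design point is that an ambiguous source phrase (e.g., "bank") can only be resolved once visual context reaches the model alongside the speech.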
arXiv Detail & Related papers (2025-09-28T16:25:33Z)
- On the Same Wavelength? Evaluating Pragmatic Reasoning in Language Models across Broad Concepts [69.69818198773244]
We study a range of LMs on both language comprehension and language production. We find that state-of-the-art LMs, but not smaller ones, achieve strong performance on language comprehension.
arXiv Detail & Related papers (2025-09-08T17:59:32Z)
- Human-like conceptual representations emerge from language prediction [72.5875173689788]
Large language models (LLMs) trained exclusively through next-token prediction over language data exhibit remarkably human-like behaviors. Are these models developing concepts akin to humans, and if so, how are such concepts represented and organized? Our results demonstrate that LLMs can flexibly derive concepts from linguistic descriptions in relation to contextual cues about other concepts. These findings establish that structured, human-like conceptual representations can naturally emerge from language prediction without real-world grounding.
arXiv Detail & Related papers (2025-01-21T23:54:17Z)
- Machines of Meaning [0.0]
We discuss the challenges in the specification of "machines of meaning". We highlight the need for detachment from anthropocentrism in the study of machines of meaning. We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models.
arXiv Detail & Related papers (2024-12-10T23:23:28Z)
- Situated Instruction Following [87.37244711380411]
We propose situated instruction following, which embraces the inherent underspecification and ambiguity of real-world communication.
The meaning of situated instructions naturally unfolds through the past actions and the expected future behaviors of the human involved.
Our experiments indicate that state-of-the-art Embodied Instruction Following (EIF) models lack holistic understanding of situated human intention.
arXiv Detail & Related papers (2024-07-15T19:32:30Z)
- Analysis of the Evolution of Advanced Transformer-Based Language Models: Experiments on Opinion Mining [0.5735035463793008]
This paper studies the behaviour of the cutting-edge Transformer-based language models on opinion mining.
Our comparative study shows which models lead and paves the way for production engineers in deciding which approach to focus on.
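For readers who want a concrete starting point, here is a minimal opinion-mining sketch using the Hugging Face `transformers` pipeline with its default sentiment model; the paper's specific models, datasets, and metrics are not reproduced here.

```python
# A minimal sketch of the kind of Transformer-based opinion-mining setup the
# paper compares, using the default sentiment-analysis pipeline.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default fine-tuned model

reviews = [
    "The battery life is fantastic.",
    "The screen cracked after a week.",
]
for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict with a predicted label and a confidence score.
    print(f"{result['label']:>8} ({result['score']:.2f}): {review}")
```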
arXiv Detail & Related papers (2023-08-07T01:10:50Z)
- SenteCon: Leveraging Lexicons to Learn Human-Interpretable Language Representations [51.08119762844217]
SenteCon is a method for introducing human interpretability in deep language representations.
We show that SenteCon provides high-level interpretability at little to no cost to predictive performance on downstream tasks.
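The core idea, as described, lends itself to a small illustration: represent text as scores over named lexicon categories, so every dimension of the representation is directly readable. The tiny lexicon and the `sentecon_like_vector` helper below are invented for illustration; SenteCon itself builds on established lexicons and deep contextual encoders not reproduced here.

```python
# A toy sketch of the lexicon-based idea behind SenteCon: encode a sentence as
# a vector over human-interpretable lexicon categories.
from collections import Counter

# Illustrative mini-lexicon; real systems use established lexicons (e.g. LIWC-style).
LEXICON = {
    "positive_emotion": {"happy", "great", "love", "fantastic"},
    "negative_emotion": {"sad", "terrible", "hate", "cracked"},
    "social": {"friend", "family", "we", "together"},
}


def sentecon_like_vector(text: str) -> dict:
    tokens = text.lower().split()
    counts = Counter()
    for category, words in LEXICON.items():
        counts[category] = sum(tok in words for tok in tokens)
    total = max(sum(counts.values()), 1)
    # Each dimension is a named category, so the representation is readable.
    return {category: counts[category] / total for category in LEXICON}


print(sentecon_like_vector("We love this fantastic phone"))
```

Because each dimension carries a category name rather than an opaque index, downstream predictions can be inspected feature by feature.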
arXiv Detail & Related papers (2023-05-24T05:06:28Z)
- Is it possible not to cheat on the Turing Test: Exploring the potential and challenges for true natural language 'understanding' by computers [0.0]
The area of natural language understanding in artificial intelligence claims to have been making great strides.
A comprehensive, interdisciplinary overview of current approaches and remaining challenges is yet to be carried out.
I unite all of these perspectives to unpack the challenges involved in reaching true (human-like) language understanding.
arXiv Detail & Related papers (2022-06-29T14:19:48Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
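To make the task format concrete, here is a hypothetical rendering of a Winograd-style nonliteral item together with a scoring stub; the field names and the log-likelihood placeholder are assumptions, not Fig-QA's actual schema or evaluation code.

```python
# A hypothetical Winograd-style nonliteral item in the spirit of Fig-QA: a
# metaphorical phrase paired with two candidate interpretations, one correct.
item = {
    "phrase": "Her memory is a steel trap.",
    "choices": [
        "She forgets things quickly.",
        "She remembers things reliably.",
    ],
    "label": 1,
}


def model_log_likelihood(text: str) -> float:
    """Stand-in for a language model's log-likelihood of an interpretation."""
    return -len(text)  # placeholder heuristic, not a real model


prediction = max(range(len(item["choices"])),
                 key=lambda i: model_log_likelihood(item["choices"][i]))
print("correct" if prediction == item["label"] else "incorrect")
```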
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
- Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond [49.93153180169685]
We introduce and clarify two basic concepts, interpretations and interpretability, that are often conflated.
We elaborate on the design of several recent interpretation algorithms from different perspectives by proposing a new taxonomy.
We summarize the existing work in evaluating models' interpretability using "trustworthy" interpretation algorithms.
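As a minimal example of the kind of interpretation algorithm such a taxonomy covers, the sketch below computes input-gradient saliency for a logistic-regression model, where the gradient has a closed form; this is a generic illustration, not one of the survey's specific algorithms, and deep models would use autodiff instead.

```python
# Input-gradient saliency for logistic regression: how much each input feature
# moves the model's output probability.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)          # trained weights (random here for illustration)
b = 0.1
x = rng.normal(size=4)          # one input example


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


p = sigmoid(w @ x + b)
# For p = sigmoid(w.x + b), the gradient is dp/dx = p * (1 - p) * w.
saliency = p * (1 - p) * w
for i, s in enumerate(saliency):
    print(f"feature {i}: saliency {s:+.3f}")
```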
arXiv Detail & Related papers (2021-03-19T08:40:30Z)
- Machine Semiotics [0.0]
For speech assistive devices, the learning of machine-specific meanings of human utterances appears to be sufficient.
Using the quite trivial example of a cognitive heating device, we show that this process can be formalized as the reinforcement learning of utterance-meaning pairs (UMPs).
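A toy sketch of that formalization might look like the following: a tabular, bandit-style update over utterance-action pairs for a heating device. The state and action names and the simple update rule are assumptions made for illustration, not the paper's exact formalization.

```python
# Reinforcement learning of utterance-meaning pairs (UMPs) for a toy heating
# device, where "meanings" are device actions learned from user feedback.
import random

UTTERANCES = ["it's cold in here", "it's too warm"]
ACTIONS = ["heat_up", "cool_down"]

# Q-values over utterance-action pairs.
q = {(u, a): 0.0 for u in UTTERANCES for a in ACTIONS}
ALPHA, EPSILON = 0.5, 0.2


def reward(utterance: str, action: str) -> float:
    """User feedback: +1 if the device inferred the intended meaning."""
    intended = {"it's cold in here": "heat_up", "it's too warm": "cool_down"}
    return 1.0 if intended[utterance] == action else -1.0


for _ in range(200):
    u = random.choice(UTTERANCES)
    if random.random() < EPSILON:                     # explore
        a = random.choice(ACTIONS)
    else:                                             # exploit best meaning so far
        a = max(ACTIONS, key=lambda act: q[(u, act)])
    q[(u, a)] += ALPHA * (reward(u, a) - q[(u, a)])   # bandit-style update

for (u, a), value in sorted(q.items()):
    print(f"{u!r} -> {a}: {value:+.2f}")
```

After a few hundred interactions the Q-table reliably pairs each utterance with its intended action, which is the sense in which the device has "learned" machine-specific meanings.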
arXiv Detail & Related papers (2020-08-24T15:49:54Z)