Trustworthy LLM-Mediated Communication: Evaluating Information Fidelity in LLM as a Communicator (LAAC) Framework in Multiple Application Domains
- URL: http://arxiv.org/abs/2511.04184v1
- Date: Thu, 06 Nov 2025 08:36:42 GMT
- Title: Trustworthy LLM-Mediated Communication: Evaluating Information Fidelity in LLM as a Communicator (LAAC) Framework in Multiple Application Domains
- Authors: Mohammed Musthafa Rafi, Adarsh Krishnamurthy, Aditya Balu
- Abstract summary: This paper systematically evaluates the trustworthiness requirements for LAAC's deployment across multiple communication domains. Preliminary findings reveal measurable trust gaps that must be addressed before LAAC can be reliably deployed in high-stakes communication scenarios.
- Score: 6.395778306248526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of AI-generated content has created an absurd communication theater in which senders use LLMs to inflate simple ideas into verbose content, recipients use LLMs to compress them back into summaries, and as a consequence neither party engages with the authentic content. LAAC (LLM as a Communicator) proposes a paradigm shift - positioning LLMs as intelligent communication intermediaries that capture the sender's intent through structured dialogue and facilitate genuine knowledge exchange with recipients. Rather than perpetuating cycles of AI-generated inflation and compression, LAAC enables authentic communication across diverse contexts including academic papers, proposals, professional emails, and cross-platform content generation. However, deploying LLMs as trusted communication intermediaries raises critical questions about information fidelity, consistency, and reliability. This position paper systematically evaluates the trustworthiness requirements for LAAC's deployment across multiple communication domains. We investigate three fundamental dimensions: (1) Information Capture Fidelity - accuracy of intent extraction during sender interviews across different communication types, (2) Reproducibility - consistency of structured knowledge across multiple interaction instances, and (3) Query Response Integrity - reliability of recipient-facing responses without hallucination, source conflation, or fabrication. Through controlled experiments spanning multiple LAAC use cases, we assess these trust dimensions using LAAC's multi-agent architecture. Preliminary findings reveal measurable trust gaps that must be addressed before LAAC can be reliably deployed in high-stakes communication scenarios.
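The three trust dimensions above lend themselves to simple quantitative checks. The sketch below is a hypothetical evaluation harness, not the paper's actual protocol: the metrics (point coverage for capture fidelity, mean pairwise similarity for reproducibility, token-overlap grounding for response integrity) are illustrative stand-ins chosen for clarity, and all function names are our own.

```python
from difflib import SequenceMatcher

def capture_fidelity(intent_points, captured_summary):
    # Fraction of the sender's stated intent points recoverable from the
    # intermediary's captured summary (toy surface-match version).
    hits = sum(1 for p in intent_points if p.lower() in captured_summary.lower())
    return hits / len(intent_points)

def reproducibility(summaries):
    # Mean pairwise string similarity across repeated capture runs of the
    # same sender intent; 1.0 means the runs are identical.
    pairs = [(a, b) for i, a in enumerate(summaries) for b in summaries[i + 1:]]
    if not pairs:
        return 1.0
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def response_integrity(answer, source_text):
    # Crude grounding check: a recipient-facing sentence counts as grounded
    # if most of its tokens appear in the captured source material.
    src = set(source_text.lower().split())
    sents = [s.strip() for s in answer.split(".") if s.strip()]
    def grounded(s):
        toks = s.lower().split()
        return sum(t in src for t in toks) / len(toks) >= 0.6
    return sum(grounded(s) for s in sents) / len(sents) if sents else 1.0
```

A real deployment would replace the surface matching with semantic similarity (e.g. embedding cosine) and an NLI-style entailment check, but the three-score structure mirrors the paper's evaluation dimensions.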
Related papers
- Verification Required: The Impact of Information Credibility on AI Persuasion [13.454393198058398]
We introduce MixTalk, a strategic communication game for LLM-to-LLM interaction that models information credibility. We evaluate state-of-the-art LLM agents in large-scale tournaments across three realistic deployment settings. We propose Tournament Oracle Policy Distillation (TOPD), an offline method that distills a tournament oracle policy from interaction logs and deploys it in-context at inference time.
arXiv Detail & Related papers (2026-02-01T02:22:28Z) - From Moderation to Mediation: Can LLMs Serve as Mediators in Online Flame Wars? [7.926773786209838]
Large language models (LLMs) have opened new possibilities for AI-for-good applications. This work explores whether LLMs can serve not merely as moderators that detect harmful content, but as mediators capable of understanding and de-escalating online conflicts. Our framework decomposes mediation into two subtasks: judgment, where an LLM evaluates the fairness and emotional dynamics of a conversation, and steering, where it generates empathetic, de-escalatory messages.
arXiv Detail & Related papers (2025-12-02T18:31:18Z) - Communication and Verification in LLM Agents towards Collaboration under Information Asymmetry [17.472005826931127]
This paper studies Large Language Model (LLM) agents in task collaboration. We extend Einstein Puzzles, a symbolic puzzle, to a table-top game. Empirical results highlight the critical importance of aligned communication.
arXiv Detail & Related papers (2025-10-29T15:03:53Z) - In-Context Reinforcement Learning via Communicative World Models [49.00028802135605]
This work formulates ICRL as a two-agent emergent communication problem. It introduces CORAL, a framework that learns a transferable communicative context. Our experiments demonstrate that this approach enables the CA to achieve significant gains in sample efficiency.
arXiv Detail & Related papers (2025-08-08T19:23:23Z) - Aligning Large Language Models for Faithful Integrity Against Opposing Argument [71.33552795870544]
Large Language Models (LLMs) have demonstrated impressive capabilities in complex reasoning tasks. They can be easily misled by unfaithful arguments during conversations, even when their original statements are correct. We propose a novel framework, named Alignment for Faithful Integrity with Confidence Estimation.
arXiv Detail & Related papers (2025-01-02T16:38:21Z) - Context-aware Communication for Multi-agent Reinforcement Learning [6.109127175562235]
We develop CACOM, a context-aware communication scheme for multi-agent reinforcement learning (MARL).
In the first stage, agents exchange coarse representations in a broadcast fashion, providing context for the second stage.
Following this, agents utilize attention mechanisms in the second stage to selectively generate messages personalized for the receivers.
To evaluate the effectiveness of CACOM, we integrate it with both actor-critic and value-based MARL algorithms.
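The two-stage scheme above can be sketched in a few lines. This is a toy stand-in for CACOM, not its actual implementation: the coarse representation is a simple truncation, and the "attention" is a plain dot-product softmax over broadcast contexts; all names here are our own.

```python
import math

def broadcast_stage(agent_states, dim=2):
    # Stage 1: each agent broadcasts a coarse (here, truncated)
    # representation of its state to provide context for stage 2.
    return [s[:dim] for s in agent_states]

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    z = sum(e)
    return [x / z for x in e]

def personalized_messages(agent_states, coarse):
    # Stage 2: sender i attends over the receivers' coarse contexts and
    # scales its full state per receiver (a toy version of CACOM's
    # learned attention; real messages would be produced by a network).
    msgs = {}
    for i, s in enumerate(agent_states):
        scores = [sum(a * b for a, b in zip(s, c)) for c in coarse]
        weights = softmax(scores)
        for j, w in enumerate(weights):
            if j != i:
                msgs[(i, j)] = [w * x for x in s]  # message from i tailored to j
    return msgs
```

The design point the sketch preserves: the cheap broadcast round supplies the context that lets the second round be selective and receiver-specific, instead of every agent broadcasting its full state.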
arXiv Detail & Related papers (2023-12-25T03:33:08Z) - Large Language Model Enhanced Multi-Agent Systems for 6G Communications [94.45712802626794]
We propose a multi-agent system with customized communication knowledge and tools for solving communication related tasks using natural language.
We validate the effectiveness of the proposed multi-agent system by designing a semantic communication system.
arXiv Detail & Related papers (2023-12-13T02:35:57Z) - Let Models Speak Ciphers: Multiagent Debate through Embeddings [84.20336971784495]
We introduce CIPHER (Communicative Inter-Model Protocol Through Embedding Representation) to address this issue.
By deviating from natural language, CIPHER offers an advantage of encoding a broader spectrum of information without any modification to the model weights.
This showcases the superiority and robustness of embeddings as an alternative "language" for communication among LLMs.
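The core idea behind CIPHER can be illustrated with a toy vocabulary: instead of sampling one token and sending its text, the sender transmits the probability-weighted average of token embeddings, preserving the full distribution. The function name, vocabulary, and embeddings below are illustrative, not from the paper's code.

```python
def cipher_message(token_probs, embeddings):
    # CIPHER-style message: an expectation over token embeddings rather
    # than a single committed token, so uncertainty survives transmission.
    dim = len(next(iter(embeddings.values())))
    msg = [0.0] * dim
    for tok, p in token_probs.items():
        for k in range(dim):
            msg[k] += p * embeddings[tok][k]
    return msg

# With one-hot embeddings the message simply carries the distribution:
# cipher_message({"yes": 0.7, "no": 0.3}, {"yes": [1.0, 0.0], "no": [0.0, 1.0]})
# yields [0.7, 0.3], whereas a sampled token would collapse it to one of them.
```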
arXiv Detail & Related papers (2023-10-10T03:06:38Z) - Reasoning over the Air: A Reasoning-based Implicit Semantic-Aware Communication Framework [124.6509194665514]
A novel implicit semantic-aware communication (iSAC) architecture is proposed for representing, communicating, and interpreting the implicit semantic meaning between source and destination users.
A projection-based semantic encoder is proposed to convert the high-dimensional graphical representation of explicit semantics into a low-dimensional semantic constellation space for efficient physical channel transmission.
A generative adversarial imitation learning-based solution, called G-RML, is proposed to enable the destination user to learn and imitate the implicit semantic reasoning process of the source user.
arXiv Detail & Related papers (2023-06-20T01:32:27Z) - Adversarial Learning for Implicit Semantic-Aware Communications [104.08383219177557]
We develop a novel adversarial learning-based implicit semantic-aware communication architecture (iSAC).
We prove that by applying iSAC, the destination user can always learn an inference rule that matches the true inference rule of the source messages.
Experimental results show that the proposed iSAC can offer up to a 19.69 dB improvement over existing non-inferential communication solutions.
arXiv Detail & Related papers (2023-01-27T08:28:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.