Language of Persuasion and Misrepresentation in Business Communication: A Textual Detection Approach
- URL: http://arxiv.org/abs/2508.09935v1
- Date: Wed, 13 Aug 2025 16:38:31 GMT
- Title: Language of Persuasion and Misrepresentation in Business Communication: A Textual Detection Approach
- Authors: Sayem Hossen, Monalisa Moon Joti, Md. Golam Rashed
- Abstract summary: Business communication digitisation has reorganised the process of persuasive discourse. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Business communication digitisation has reorganised the process of persuasive discourse, allowing not only greater transparency but also more advanced deception. This inquiry synthesises classical rhetoric and communication psychology with linguistic theory and empirical studies in financial reporting, sustainability discourse, and digital marketing to explain how deceptive language can be systematically detected through its persuasive lexicon. In controlled settings, detection accuracies above 99% were achieved using computational textual analysis and personalised transformer models. Reproducing this performance in multilingual settings remains problematic, however, largely because sufficient data are hard to obtain and few multilingual text-processing infrastructures are in place. This evidence points to a widening gap between theoretical representations of communication and their empirical approximations, and hence to the need for robust automatic text-identification systems as AI-based discourse becomes increasingly realistic in communicating with humans.
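The lexicon-driven side of the detection approach can be sketched as follows. This is a minimal illustration only: the cue list, function names, and threshold are invented assumptions for the example, not the paper's actual lexicon or its transformer-based model.

```python
import re
from collections import Counter

# Hypothetical persuasive-cue lexicon (illustrative, not from the paper).
PERSUASIVE_CUES = {
    "guaranteed", "unprecedented", "revolutionary", "risk-free",
    "best-in-class", "certainly", "undoubtedly", "proven",
}

def persuasion_density(text: str) -> float:
    """Return the fraction of tokens that match the persuasive-cue lexicon."""
    tokens = re.findall(r"[a-z\-]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[cue] for cue in PERSUASIVE_CUES)
    return hits / len(tokens)

def flag_suspect(text: str, threshold: float = 0.05) -> bool:
    """Flag text whose persuasive-cue density exceeds an assumed cut-off."""
    return persuasion_density(text) >= threshold
```

For instance, `flag_suspect("Our guaranteed, risk-free returns are proven and unprecedented.")` returns `True`, while a neutral sentence such as "The quarterly report is attached for review." is not flagged. A production system of the kind described in the abstract would replace this hand-built lexicon with a trained transformer classifier.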
Related papers
- MT-PingEval: Evaluating Multi-Turn Collaboration with Private Information Games [70.37904949359938]
We evaluate language models in multi-turn interactions using a suite of collaborative games that require effective communication about private information. We find that language models are unable to use interactive collaboration to improve over the non-interactive baseline scenario. We analyze the linguistic features of these dialogues, assessing the roles of sycophancy, information density, and discourse coherence.
arXiv Detail & Related papers (2026-02-27T17:13:20Z) - Detecting Mental Manipulation in Speech via Synthetic Multi-Speaker Dialogue [12.181747090385612]
Mental manipulation is the strategic use of language to covertly influence or exploit others. We present the first study of mental manipulation detection in spoken dialogues. Using few-shot large audio-language models and human annotation, we evaluate how modality affects detection accuracy and perception.
arXiv Detail & Related papers (2026-01-13T09:02:08Z) - Towards Inclusive Communication: A Unified Framework for Generating Spoken Language from Sign, Lip, and Audio [52.859261069569165]
We propose the first unified framework capable of handling diverse combinations of sign language, lip movements, and audio for spoken-language text generation. We focus on three main objectives: (i) designing a unified, modality-agnostic architecture capable of effectively processing heterogeneous inputs; (ii) exploring the underexamined synergy among modalities, particularly the role of lip movements as non-manual cues in sign language comprehension; and (iii) achieving performance on par with or better than state-of-the-art models specialized for individual tasks.
arXiv Detail & Related papers (2025-08-28T06:51:42Z) - Linguistic Knowledge Transfer Learning for Speech Enhancement [29.191204225828354]
Linguistic knowledge plays a crucial role in spoken language comprehension. Most speech enhancement methods rely on acoustic features to learn the mapping relationship between noisy and clean speech. We propose the Cross-Modality Knowledge Transfer (CMKT) learning framework to integrate linguistic knowledge into speech enhancement models.
arXiv Detail & Related papers (2025-03-10T09:00:18Z) - Communication is All You Need: Persuasion Dataset Construction via Multi-LLM Communication [21.041517755843977]
Large Language Models (LLMs) have shown proficiency in generating persuasive dialogue, yet concerns about the fluency and sophistication of their outputs persist. This paper presents a multi-LLM communication framework designed to enhance the generation of persuasive data automatically.
arXiv Detail & Related papers (2025-02-13T02:22:48Z) - How "Real" is Your Real-Time Simultaneous Speech-to-Text Translation System? [7.252894835396412]
Simultaneous speech-to-text translation (SimulST) translates source-language speech into target-language text concurrently with the speaker's speech, ensuring low latency for better user comprehension. Despite its intended application to unbounded speech, most research has focused on human pre-segmented speech, simplifying the task and overlooking significant challenges.
arXiv Detail & Related papers (2024-12-24T15:26:31Z) - Enhancing expressivity transfer in textless speech-to-speech translation [0.0]
Existing state-of-the-art systems fall short when it comes to capturing and transferring expressivity accurately across different languages.
This study presents a novel method that operates at the discrete speech unit level and leverages multilingual emotion embeddings.
We demonstrate how these embeddings can be used to effectively predict the pitch and duration of speech units in the target language.
arXiv Detail & Related papers (2023-10-11T08:07:22Z) - Cognitive Semantic Communication Systems Driven by Knowledge Graph: Principle, Implementation, and Performance Evaluation [74.38561925376996]
Two cognitive semantic communication frameworks are proposed for the single-user and multiple-user communication scenarios.
An effective semantic correction algorithm is proposed by mining the inference rule from the knowledge graph.
For the multi-user cognitive semantic communication system, a message recovery algorithm is proposed to distinguish messages of different users.
arXiv Detail & Related papers (2023-03-15T12:01:43Z) - Less Data, More Knowledge: Building Next Generation Semantic Communication Networks [180.82142885410238]
We present the first rigorous vision of a scalable end-to-end semantic communication network.
We first discuss how the design of semantic communication networks requires a move from data-driven networks towards knowledge-driven ones.
By using semantic representation and languages, we show that the traditional transmitter and receiver now become a teacher and apprentice.
arXiv Detail & Related papers (2022-11-25T19:03:25Z) - On Reality and the Limits of Language Data: Aligning LLMs with Human Norms [10.02997544238235]
Large Language Models (LLMs) harness linguistic associations in vast natural language data for practical applications.
We explore this question using a novel and tightly controlled reasoning test (ART) and compare human norms against versions of GPT-3.
Our findings highlight the categories of common-sense relations that models could learn directly from data, as well as areas of weakness.
arXiv Detail & Related papers (2022-08-25T10:21:23Z) - Color Overmodification Emerges from Data-Driven Learning and Pragmatic Reasoning [53.088796874029974]
We show that speakers' referential expressions depart from communicative ideals in ways that help illuminate the nature of pragmatic language use.
By adopting neural networks as learning agents, we show that overmodification is more likely with environmental features that are infrequent or salient.
arXiv Detail & Related papers (2022-05-18T18:42:43Z) - Leveraging Pre-trained Language Model for Speech Sentiment Analysis [58.78839114092951]
We explore the use of pre-trained language models to learn sentiment information of written texts for speech sentiment analysis.
We propose a pseudo label-based semi-supervised training strategy using a language model on an end-to-end speech sentiment approach.
arXiv Detail & Related papers (2021-06-11T20:15:21Z) - Improving Machine Reading Comprehension with Contextualized Commonsense Knowledge [62.46091695615262]
We aim to extract commonsense knowledge to improve machine reading comprehension.
We propose to represent relations implicitly by situating structured knowledge in a context.
We employ a teacher-student paradigm to inject multiple types of contextualized knowledge into a student machine reader.
arXiv Detail & Related papers (2020-09-12T17:20:01Z) - Experience Grounds Language [185.73483760454454]
Language understanding research is held back by a failure to relate language to the physical world it describes and to the social interactions it facilitates.
Despite the incredible effectiveness of language processing models at tackling tasks after being trained on text alone, successful linguistic communication relies on a shared experience of the world.
arXiv Detail & Related papers (2020-04-21T16:56:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.