Do Stochastic Parrots have Feelings Too? Improving Neural Detection of
Synthetic Text via Emotion Recognition
- URL: http://arxiv.org/abs/2310.15904v1
- Date: Tue, 24 Oct 2023 15:07:35 GMT
- Title: Do Stochastic Parrots have Feelings Too? Improving Neural Detection of
Synthetic Text via Emotion Recognition
- Authors: Alan Cowap, Yvette Graham, Jennifer Foster
- Abstract summary: Recent developments in generative AI have shone a spotlight on high-performance synthetic text generation technologies.
We draw inspiration from psychological studies which suggest that people can be driven by emotion and encode emotion in the text they compose.
- Score: 16.31088877974614
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent developments in generative AI have shone a spotlight on
high-performance synthetic text generation technologies. The now wide
availability and ease of use of such models highlights the urgent need to
provide equally powerful technologies capable of identifying synthetic text.
With this in mind, we draw inspiration from psychological studies which suggest
that people can be driven by emotion and encode emotion in the text they
compose. We hypothesize that pretrained language models (PLMs) have an
affective deficit because they lack such an emotional driver when generating
text and consequently may generate synthetic text which has affective
incoherence, i.e., lacking the kind of emotional coherence present in
human-authored text. We subsequently develop an emotionally aware detector by
fine-tuning a PLM on emotion. Experimental results indicate that our
emotionally-aware detector achieves improvements across a range of synthetic
text generators, various sized models, datasets, and domains. Finally, we
compare our emotionally-aware synthetic text detector to ChatGPT in the task of
identification of its own output and show substantial gains, reinforcing the
potential of emotion as a signal to identify synthetic text. Code, models, and
datasets are available at https://github.com/alanagiasi/emoPLMsynth
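The two-stage recipe in the abstract, fine-tune a PLM on emotion first and then adapt it to detection, can be sketched with Hugging Face `transformers`. Everything below is an illustrative assumption rather than the paper's actual configuration: the base model (`roberta-base`), the emotion corpus (`dair-ai/emotion`, six classes), the hypothetical `detection.jsonl` file of human/synthetic pairs, and all hyperparameters. The authors' real setup is in the repository linked above.

```python
# Illustrative sketch of an "emotionally aware" detector: Stage 1 fine-tunes
# a PLM on emotion classification, Stage 2 reuses that encoder for binary
# human-vs-synthetic detection. Assumed setup, not the paper's exact recipe.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

# Stage 1: fine-tune on a six-class emotion corpus so the encoder's
# representations carry affective information.
emotion = load_dataset("dair-ai/emotion").map(tokenize, batched=True)
emo_model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=6
)
Trainer(
    model=emo_model,
    args=TrainingArguments(output_dir="emo_ckpts", num_train_epochs=3),
    train_dataset=emotion["train"],
    tokenizer=tokenizer,  # enables dynamic padding in the default collator
).train()
emo_model.save_pretrained("emo_plm")

# Stage 2: reload the emotion-tuned encoder with a fresh two-way head
# (0 = human-authored, 1 = synthetic). `detection.jsonl` is a hypothetical
# file of {"text": ..., "label": ...} records.
detector = AutoModelForSequenceClassification.from_pretrained(
    "emo_plm", num_labels=2, ignore_mismatched_sizes=True
)
detect = load_dataset("json", data_files="detection.jsonl")["train"]
detect = detect.map(tokenize, batched=True)
Trainer(
    model=detector,
    args=TrainingArguments(output_dir="detector_ckpts", num_train_epochs=3),
    train_dataset=detect,
    tokenizer=tokenizer,
).train()
```

Reloading with `ignore_mismatched_sizes=True` keeps the emotion-tuned encoder weights and randomly initializes only the new two-way classification head, so detection starts from representations that, per the hypothesis above, already encode the affective signal synthetic text is claimed to lack.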
Related papers
- EmoSphere++: Emotion-Controllable Zero-Shot Text-to-Speech via Emotion-Adaptive Spherical Vector [26.656512860918262]
EmoSphere++ is an emotion-controllable zero-shot TTS model that can control emotional style and intensity to resemble natural human speech.
We introduce a novel emotion-adaptive spherical vector that models emotional style and intensity without human annotation.
We employ a conditional flow matching-based decoder to achieve high-quality and expressive emotional TTS in a few sampling steps.
arXiv Detail & Related papers (2024-11-04T21:33:56Z)
- EmoKnob: Enhance Voice Cloning with Fine-Grained Emotion Control [7.596581158724187]
EmoKnob is a framework that allows fine-grained emotion control in speech synthesis with few-shot demonstrative samples of arbitrary emotion.
We show that our emotion control framework effectively embeds emotions into speech and surpasses emotion expressiveness of commercial TTS services.
arXiv Detail & Related papers (2024-10-01T01:29:54Z)
- Emotional Dimension Control in Language Model-Based Text-to-Speech: Spanning a Broad Spectrum of Human Emotions [37.075331767703986]
Current emotional text-to-speech systems face challenges in mimicking a broad spectrum of human emotions.
This paper proposes a TTS framework that facilitates control over pleasure, arousal, and dominance.
It can synthesize a diversity of emotional styles without requiring any emotional speech data during TTS training.
arXiv Detail & Related papers (2024-09-25T07:16:16Z)
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- The Good, The Bad, and Why: Unveiling Emotions in Generative AI [73.94035652867618]
We show that EmotionPrompt can boost the performance of AI models while EmotionAttack can hinder it.
EmotionDecode reveals that AI models can comprehend emotional stimuli akin to the mechanism of dopamine in the human brain.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- High-fidelity Generalized Emotional Talking Face Generation with Multi-modal Emotion Space Learning [43.09015109281053]
We propose a more flexible and generalized framework for talking face generation.
Specifically, we supplement the emotion style in text prompts and use an Aligned Multi-modal Emotion encoder to embed the text, image, and audio emotion modality into a unified space.
An Emotion-aware Audio-to-3DMM Convertor is proposed to connect the emotion condition and the audio sequence to a structural representation.
arXiv Detail & Related papers (2023-05-04T05:59:34Z)
- EMOVIE: A Mandarin Emotion Speech Dataset with a Simple Emotional Text-to-Speech Model [56.75775793011719]
We introduce and publicly release a Mandarin emotion speech dataset including 9,724 samples with audio files and human-labeled emotion annotations.
Unlike models that need additional reference audio as input, our model can predict emotion labels from the input text alone and generate more expressive speech conditioned on the emotion embedding.
In the experiment phase, we first validate the effectiveness of our dataset by an emotion classification task. Then we train our model on the proposed dataset and conduct a series of subjective evaluations.
arXiv Detail & Related papers (2021-06-17T08:34:21Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
This article proposes a unified end-to-end neural architecture, which is capable of simultaneously encoding the semantics and the emotions in a post.
Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Reinforcement Learning for Emotional Text-to-Speech Synthesis with Improved Emotion Discriminability [82.39099867188547]
Emotional text-to-speech synthesis (ETTS) has seen much progress in recent years.
We propose a new interactive training paradigm for ETTS, denoted as i-ETTS.
We formulate an iterative training strategy with reinforcement learning to ensure the quality of i-ETTS optimization.
arXiv Detail & Related papers (2021-04-03T13:52:47Z)