Fine-Grained Emotional Paraphrasing along Emotion Gradients
- URL: http://arxiv.org/abs/2212.03297v1
- Date: Sun, 30 Oct 2022 05:38:22 GMT
- Title: Fine-Grained Emotional Paraphrasing along Emotion Gradients
- Authors: Justin Xie
- Abstract summary: We introduce a new task of fine-grained emotional paraphrasing along emotion gradients.
We propose a framework for addressing this task by fine-tuning text-to-text Transformers through multi-task training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Paraphrase generation, a.k.a. paraphrasing, is a common and important task in
natural language processing. Emotional paraphrasing, which changes the emotion
embodied in a piece of text while preserving its meaning, has many potential
applications, e.g., moderating online dialogues and preventing cyberbullying.
We introduce a new task of fine-grained emotional paraphrasing along emotion
gradients, that is, altering the emotional intensities of the paraphrases at a
fine granularity, following smooth variations in affective dimensions while preserving
the meanings of the originals. We propose a framework for addressing this task
by fine-tuning text-to-text Transformers through multi-task training. We
enhance several widely used paraphrasing corpora by annotating the input and
target texts with their fine-grained emotion labels. With these labels,
fine-tuning text-to-text Transformers on these corpora entails multi-task
training. Evaluations of the fine-tuned Transformers on separate test sets show
that including fine-grained emotion labels in the paraphrase task significantly
improves the chance of obtaining high-quality paraphrases of the desired
emotions, i.e., more than doubling the number of exact matches of desired
emotions while achieving consistently better scores in paraphrase metrics such
as BLEU, ROUGE, and METEOR.
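To make the multi-task setup concrete, the sketch below shows one way an
emotion-annotated paraphrase pair could be serialized for text-to-text
(T5-style) fine-tuning. The prompt template, emotion label names, and helper
function are illustrative assumptions, not the paper's exact format.

```python
# Hypothetical serialization of an emotion-annotated paraphrase pair for
# text-to-text fine-tuning. The task prefix names the source and target
# emotions so one model learns paraphrasing and emotion transfer jointly
# (the multi-task training described in the abstract).

def make_example(source_text, source_emotion, target_emotion, target_text):
    """Build one (model_input, model_target) pair.

    All emotion labels here are assumed fine-grained labels (e.g. from a
    classifier over the GoEmotions taxonomy); the template is illustrative.
    """
    model_input = (
        f"paraphrase from {source_emotion} to {target_emotion}: {source_text}"
    )
    return model_input, target_text

# Example: lowering intensity along an emotion gradient (anger -> annoyance).
inp, tgt = make_example(
    "I am furious that the package is late!",
    source_emotion="anger",
    target_emotion="annoyance",
    target_text="It's a bit irritating that the package is late.",
)
```

Pairs in this form can be fed to any encoder-decoder model with a standard
sequence-to-sequence training loop; the emotion transition is carried entirely
by the input prefix.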
Related papers
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
  Emotional Voice Conversion aims to manipulate a speech according to a given emotion while preserving non-emotion components.
  We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
  arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Emotion and Sentiment Guided Paraphrasing [3.5027291542274366]
  We introduce a new task of fine-grained emotional paraphrasing along emotion gradients.
  We reconstruct several widely used paraphrasing datasets by augmenting the input and target texts with their fine-grained emotion labels.
  We propose a framework for emotion and sentiment guided paraphrasing by leveraging pre-trained language models for conditioned text generation.
  arXiv Detail & Related papers (2023-06-08T20:59:40Z)
- Experiencer-Specific Emotion and Appraisal Prediction [13.324006587838523]
  Emotion classification in NLP assigns emotions to texts, such as sentences or paragraphs.
  We focus on the experiencers of events, and assign an emotion (if any holds) to each of them.
  Our experiencer-aware models of emotions and appraisals outperform the experiencer-agnostic baselines.
  arXiv Detail & Related papers (2022-10-21T16:04:27Z)
- CEFER: A Four Facets Framework based on Context and Emotion embedded features for Implicit and Explicit Emotion Recognition [2.5137859989323537]
  We propose a framework that analyses text at both the sentence and word levels.
  We name it CEFER (Context and Emotion embedded Framework for Emotion Recognition).
  CEFER combines the emotional vector of each word, including explicit and implicit emotions, with the feature vector of each word based on context.
  arXiv Detail & Related papers (2022-09-28T11:16:32Z)
- Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning [70.30713251031052]
  We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
  Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
  arXiv Detail & Related papers (2022-06-15T01:25:32Z)
- EmoInHindi: A Multi-label Emotion and Intensity Annotated Dataset in Hindi for Emotion Recognition in Dialogues [44.79509115642278]
  We create a large conversational dataset in Hindi named EmoInHindi for multi-label emotion and intensity recognition in conversations.
  We prepare our dataset in a Wizard-of-Oz manner for mental health and legal counselling of crime victims.
  arXiv Detail & Related papers (2022-05-27T11:23:50Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
  Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
  In this paper, we aim to explicitly characterize and control the intensity of emotion.
  We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
  arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- Perspective-taking and Pragmatics for Generating Empathetic Responses Focused on Emotion Causes [50.569762345799354]
  We argue that two issues must be tackled at the same time: (i) identifying which word is the cause for the other's emotion from his or her utterance and (ii) reflecting those specific words in the response generation.
  Taking inspiration from social cognition, we leverage a generative estimator to infer emotion cause words from utterances with no word-level label.
  arXiv Detail & Related papers (2021-09-18T04:22:49Z)
- Emotion-aware Chat Machine: Automatic Emotional Response Generation for Human-like Emotional Interaction [55.47134146639492]
  This article proposes a unified end-to-end neural architecture, which is capable of simultaneously encoding the semantics and the emotions in a post.
  Experiments on real-world data demonstrate that the proposed method outperforms the state-of-the-art methods in terms of both content coherence and emotion appropriateness.
  arXiv Detail & Related papers (2021-06-06T06:26:15Z)
- Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset [84.53659233967225]
  Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.
  We propose a novel framework based on a variational auto-encoding Wasserstein generative adversarial network (VAW-GAN).
  We show that the proposed framework achieves remarkable performance by consistently outperforming the baseline framework.
  arXiv Detail & Related papers (2020-10-28T07:16:18Z)
- Challenges in Emotion Style Transfer: An Exploration with a Lexical Substitution Pipeline [16.3589458084367]
  We design a transparent emotion style transfer pipeline based on three steps.
  We explore in which cases lexical substitution can vary the emotional load of texts.
  We find, indeed, that simultaneous adjustments of content and emotion are conflicting objectives.
  arXiv Detail & Related papers (2020-05-15T16:11:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.