Fine-grained Sentiment Controlled Text Generation
- URL: http://arxiv.org/abs/2006.09891v1
- Date: Wed, 17 Jun 2020 14:17:58 GMT
- Title: Fine-grained Sentiment Controlled Text Generation
- Authors: Bidisha Samanta, Mohit Agarwal, Niloy Ganguly
- Abstract summary: Controlled text generation techniques aim to regulate specific attributes while preserving the attribute-independent content.
We propose DE-VAE, a hierarchical framework which captures both an information-enriched entangled representation and an attribute-specific disentangled representation.
- Score: 28.20006438705556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Controlled text generation techniques aim to regulate specific attributes
(e.g. sentiment) while preserving the attribute-independent content. The
state-of-the-art approaches model the specified attribute as a structured or
discrete representation while making the content representation independent of
it to achieve better control. However, disentangling the text representation
into separate latent spaces overlooks the complex dependencies between content
and attribute, leading to the generation of poorly constructed, less meaningful
sentences. Moreover, such an approach fails to provide finer control over the
degree of attribute change. To address these problems of controlled text
generation, in this paper we propose DE-VAE, a hierarchical framework which
captures both an information-enriched entangled representation and an
attribute-specific disentangled representation in different hierarchies.
DE-VAE achieves better control of sentiment as an attribute while preserving
the content by learning a suitable lossless transformation network from the
disentangled sentiment space to the desired entangled representation. Through
feature supervision on a single dimension of the disentangled representation,
DE-VAE maps the variation of sentiment to a continuous space, which helps in
smoothly regulating sentiment from positive to negative and vice versa.
Detailed experiments on three publicly available review datasets show the
superiority of DE-VAE over recent state-of-the-art approaches.
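The abstract's central mechanism pairs a disentangled space, in which one supervised dimension carries sentiment, with a lossless transformation into the entangled representation the decoder consumes. A toy NumPy sketch can make that concrete. This is a hypothetical illustration, not the authors' implementation: the orthogonal matrix `W` stands in for the learned transformation network (orthogonality stands in for losslessness), the content code is random, and the decoder is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "disentangled" latent: dimension 0 is supervised to carry sentiment,
# the remaining dimensions carry attribute-independent content.
content = rng.normal(size=7)  # fixed content code

def make_latent(sentiment, content):
    """Assemble a disentangled latent with sentiment on the supervised dim."""
    return np.concatenate(([sentiment], content))

# Hypothetical "lossless transformation" from the disentangled space to the
# entangled representation: an invertible linear map (here orthogonal, so the
# inverse is just the transpose).
dim = 8
W = np.linalg.qr(rng.normal(size=(dim, dim)))[0]

def to_entangled(z):
    return W @ z

def to_disentangled(h):
    return W.T @ h  # exact inverse, since W is orthogonal

# Smoothly regulate sentiment from negative to positive by traversing the
# single supervised dimension while keeping the content code fixed.
trajectory = [to_entangled(make_latent(s, content)) for s in np.linspace(-2, 2, 5)]

# Round-tripping recovers the sentiment coordinate exactly (lossless transform).
recovered = [to_disentangled(h)[0] for h in trajectory]
print(np.round(recovered, 6))
```

Because the map is invertible, no information is discarded when moving between the two spaces; this is the property the paper's transformation network is trained to approximate.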
Related papers
- Continuous, Subject-Specific Attribute Control in T2I Models by Identifying Semantic Directions [21.371773126590874]
We show that there exist directions in the commonly used token-level CLIP text embeddings that enable fine-grained subject-specific control of high-level attributes in text-to-image models.
We introduce one efficient optimization-free and one robust optimization-based method to identify these directions for specific attributes from contrastive text prompts.
arXiv Detail & Related papers (2024-03-25T18:00:42Z)
- Text Attribute Control via Closed-Loop Disentanglement [72.2786244367634]
We propose a novel approach to achieve a robust control of attributes while enhancing content preservation.
In this paper, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.
We conducted experiments on three text datasets, including the Yelp Service review dataset, the Amazon Product review dataset, and the GoEmotions dataset.
arXiv Detail & Related papers (2023-12-01T01:26:38Z)
- Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation [58.911255139171075]
Controllable text generation (CTG) aims to generate text with desired attributes.
We propose a novel lightweight decoding framework named Air-Decoding.
Our method achieves a new state-of-the-art control performance.
arXiv Detail & Related papers (2023-10-23T12:59:11Z)
- A Distributional Lens for Multi-Aspect Controllable Text Generation [17.97374410245602]
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
Existing methods achieve complex multi-aspect control by fusing multiple controllers learned from single-aspect.
We propose to directly search for the intersection areas of multiple attribute distributions as their combination for generation.
arXiv Detail & Related papers (2022-10-06T13:08:04Z)
- TransFA: Transformer-based Representation for Face Attribute Evaluation [87.09529826340304]
We propose a novel transformer-based representation for face attribute evaluation (TransFA).
The proposed TransFA achieves superior performances compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-07-12T10:58:06Z)
- Controllable Dialogue Generation with Disentangled Multi-grained Style Specification and Attribute Consistency Reward [47.96949534259019]
We propose a controllable dialogue generation model to steer response generation under multi-attribute constraints.
We categorize the commonly used control attributes into global and local ones, which possess different granularities of effects on response generation.
Our model can significantly outperform competitive baselines in terms of response quality, content diversity and controllability.
arXiv Detail & Related papers (2021-09-14T14:29:38Z)
- Towards Disentangling Latent Space for Unsupervised Semantic Face Editing [21.190437168936764]
Supervised attribute editing requires annotated training data which is difficult to obtain and limits the editable attributes to those with labels.
In this paper, we present a new technique termed Structure-Texture Independent Architecture with Weight Decomposition and Orthogonal Regularization (STIA-WO) to disentangle the latent space for unsupervised semantic face editing.
arXiv Detail & Related papers (2020-11-05T03:29:24Z)
- Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z)
- Attribute-based Regularization of Latent Spaces for Variational Auto-Encoders [79.68916470119743]
We present a novel method to structure the latent space of a Variational Auto-Encoder (VAE) to encode different continuous-valued attributes explicitly.
This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded.
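The monotonicity constraint described above is commonly realized as a batch-wise loss over pairwise differences: a soft sign (tanh) of the latent-code differences is pushed toward the hard sign of the attribute-value differences. A minimal NumPy sketch of that idea, assuming the mean-absolute-error reduction and a `delta` sharpness factor from the common formulation; the function name and example values are illustrative, not taken from the paper:

```python
import numpy as np

def attribute_regularization_loss(z_dim, attr, delta=10.0):
    """Penalize disagreement between the ordering of one latent dimension
    and the ordering of the attribute values over a batch.

    z_dim : shape (batch,), values of the regularized latent dimension
    attr  : shape (batch,), ground-truth attribute values
    """
    d_z = z_dim[:, None] - z_dim[None, :]  # pairwise latent differences
    d_a = attr[:, None] - attr[None, :]    # pairwise attribute differences
    # Zero when tanh(delta * d_z) matches sign(d_a) everywhere, i.e. when the
    # latent dimension is monotone in the attribute across the batch.
    return np.mean(np.abs(np.tanh(delta * d_z) - np.sign(d_a)))

attr = np.array([0.1, 0.4, 0.7, 0.9])
# A latent dimension ordered like the attribute incurs almost no loss...
aligned_loss = attribute_regularization_loss(np.array([-1.0, -0.3, 0.5, 1.2]), attr)
# ...while a reversed ordering is heavily penalized.
reversed_loss = attribute_regularization_loss(np.array([1.2, 0.5, -0.3, -1.0]), attr)
print(aligned_loss, reversed_loss)
```

Adding such a term to the VAE objective ties attribute values to a single latent coordinate, which is what makes continuous traversal along that coordinate meaningful.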
arXiv Detail & Related papers (2020-04-11T20:53:13Z)
- Heavy-tailed Representations, Text Polarity Classification & Data Augmentation [11.624944730002298]
We develop a novel method to learn a heavy-tailed embedding with desirable regularity properties.
A classifier dedicated to the tails of the proposed embedding is obtained, whose performance outperforms the baseline.
Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework.
arXiv Detail & Related papers (2020-03-25T19:24:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences of their use.