Text Attribute Control via Closed-Loop Disentanglement
- URL: http://arxiv.org/abs/2312.00277v1
- Date: Fri, 1 Dec 2023 01:26:38 GMT
- Title: Text Attribute Control via Closed-Loop Disentanglement
- Authors: Lei Sha, Thomas Lukasiewicz
- Abstract summary: We propose a novel approach to achieve a robust control of attributes while enhancing content preservation.
In this paper, we use a semi-supervised contrastive learning method to encourage the disentanglement of attributes in latent spaces.
We conducted experiments on three text datasets, including the Yelp Service review dataset, the Amazon Product review dataset, and the GoEmotions dataset.
- Score: 72.2786244367634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Changing an attribute of a text without changing the content usually
requires first disentangling the text into mutually independent attribute and
content representations. Then, in the inference phase, the representation of one
attribute is tuned to a different value, expecting that the corresponding
attribute of the text can also be changed accordingly. The usual way of
disentanglement is to add some constraints on the latent space of an
encoder-decoder architecture, including adversarial-based constraints and
mutual-information-based constraints. However, previous semi-supervised
attribute-change procedures are usually insufficient to guarantee both
successful attribute change and content preservation. In this paper, we propose a novel
approach to achieve a robust control of attributes while enhancing content
preservation. In this approach, we use a semi-supervised contrastive learning
method to encourage the disentanglement of attributes in latent spaces.
Unlike previous works, we re-disentangle the reconstructed sentence
and compare the re-disentangled latent space with the original latent space,
which makes a closed-loop disentanglement process. This also helps content
preservation. In addition, the contrastive learning method can also replace
mutual-information minimization and adversarial training in the disentanglement
process, which reduces the computational cost. We
conducted experiments on three text datasets, including the Yelp Service review
dataset, the Amazon Product review dataset, and the GoEmotions dataset. The
experimental results show the effectiveness of our model.
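As a rough illustration of the closed-loop idea (not the authors' implementation), the pipeline can be sketched with toy linear encoders in NumPy: encode an input into content and attribute latents, decode, re-encode the reconstruction, and penalize the gap between the original and re-disentangled latents, alongside a margin-based contrastive term. All function names, shapes, and the linear parameterization here are hypothetical simplifications.

```python
import numpy as np

def encode(x, W_c, W_a):
    """Split an input vector into content and attribute latents (toy linear encoder)."""
    return W_c @ x, W_a @ x

def decode(z_c, z_a, U):
    """Reconstruct the input from the concatenated latents (toy linear decoder)."""
    return U @ np.concatenate([z_c, z_a])

def contrastive_loss(anchor, positive, negative, margin=1.0):
    """Pull same-attribute latents together, push different-attribute latents apart."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def closed_loop_loss(x, W_c, W_a, U):
    """Re-disentangle the reconstruction and compare with the original latents."""
    z_c, z_a = encode(x, W_c, W_a)
    x_rec = decode(z_c, z_a, U)
    z_c2, z_a2 = encode(x_rec, W_c, W_a)
    return np.linalg.norm(z_c - z_c2) + np.linalg.norm(z_a - z_a2)
```

When the decoder exactly inverts the encoder, the closed-loop term vanishes; during training it acts as a consistency signal that also discourages the decoder from drifting away from the encoded content.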
Related papers
- Air-Decoding: Attribute Distribution Reconstruction for Decoding-Time Controllable Text Generation [58.911255139171075]
Controllable text generation (CTG) aims to generate text with desired attributes.
We propose a novel lightweight decoding framework named Air-Decoding.
Our method achieves a new state-of-the-art control performance.
arXiv Detail & Related papers (2023-10-23T12:59:11Z)
- Learning Conditional Invariance through Cycle Consistency [60.85059977904014]
We propose a novel approach to identify meaningful and independent factors of variation in a dataset.
Our method involves two separate latent subspaces for the target property and the remaining input information.
We demonstrate on synthetic and molecular data that our approach identifies more meaningful factors which lead to sparser and more interpretable models.
arXiv Detail & Related papers (2021-11-25T17:33:12Z)
- Disentangled Face Attribute Editing via Instance-Aware Latent Space Search [30.17338705964925]
A rich set of semantic directions exists in the latent space of Generative Adversarial Networks (GANs).
Existing methods may suffer from poor attribute-variation disentanglement, leading to unwanted changes in other attributes when altering the desired one.
We propose a novel framework (IALS) that performs Instance-Aware Latent-Space Search to find semantic directions for disentangled attribute editing.
arXiv Detail & Related papers (2021-05-26T16:19:08Z)
- Fine-grained Sentiment Controlled Text Generation [28.20006438705556]
Controlled text generation techniques aim to regulate specific attributes while preserving the attribute-independent content.
We propose DE-VAE, a hierarchical framework which captures both an information-enriched entangled representation and an attribute-specific disentangled representation.
arXiv Detail & Related papers (2020-06-17T14:17:58Z)
- Learning to Manipulate Individual Objects in an Image [71.55005356240761]
We describe a method to train a generative model with latent factors that are independent and localized.
This means that perturbing the latent variables affects only local regions of the synthesized image, corresponding to objects.
Unlike other unsupervised generative models, ours enables object-centric manipulation, without requiring object-level annotations.
arXiv Detail & Related papers (2020-04-11T21:50:20Z)
- Attribute-based Regularization of Latent Spaces for Variational Auto-Encoders [79.68916470119743]
We present a novel method to structure the latent space of a Variational Auto-Encoder (VAE) to encode different continuous-valued attributes explicitly.
This is accomplished by using an attribute regularization loss which enforces a monotonic relationship between the attribute values and the latent code of the dimension along which the attribute is to be encoded.
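The monotonic relationship described above can be sketched as a pairwise-ranking penalty in NumPy. This is a hedged approximation of such an attribute-regularization loss, not the paper's exact formulation: it compares the signs of all pairwise differences of one latent dimension against those of the attribute values, with a hypothetical `delta` parameter controlling the sharpness of the tanh.

```python
import numpy as np

def attribute_regularization_loss(z_dim, attrs, delta=1.0):
    """Encourage one latent dimension to vary monotonically with an attribute.

    z_dim : (batch,) values of the latent dimension chosen to encode the attribute
    attrs : (batch,) ground-truth attribute values
    If the latent dimension orders the batch the same way the attribute does,
    tanh(delta * dz) agrees with sign(da) and the loss is small.
    """
    dz = z_dim[:, None] - z_dim[None, :]   # pairwise latent differences
    da = attrs[:, None] - attrs[None, :]   # pairwise attribute differences
    return np.abs(np.tanh(delta * dz) - np.sign(da)).mean()
```

A latent dimension that increases with the attribute yields a near-zero loss, while one that decreases with it is penalized heavily, which is what pushes the chosen dimension to encode the attribute monotonically.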
arXiv Detail & Related papers (2020-04-11T20:53:13Z)
- Learning to Select Bi-Aspect Information for Document-Scale Text Content Manipulation [50.01708049531156]
We focus on a new practical task, document-scale text content manipulation, which is the opposite of text style transfer.
In detail, the input is a set of structured records and a reference text for describing another recordset.
The output is a summary that accurately describes the partial content in the source recordset with the same writing style of the reference.
arXiv Detail & Related papers (2020-02-24T12:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.