So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
- URL: http://arxiv.org/abs/2205.04093v1
- Date: Mon, 9 May 2022 07:46:40 GMT
- Title: So Different Yet So Alike! Constrained Unsupervised Text Style Transfer
- Authors: Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, Roger
Zimmermann, Soujanya Poria
- Abstract summary: We introduce a method for constrained unsupervised text style transfer by introducing two complementary losses to the generative adversarial network (GAN) family of models.
Unlike the competing losses used in GANs, we introduce cooperative losses where the discriminator and the generator cooperate and reduce the same loss.
We show that the complementary cooperative losses improve text quality, according to both automated and human evaluation measures.
- Score: 54.4773992696361
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic transfer of text between domains has become popular in recent
times. One of its aims is to preserve the semantic content of text being
translated from source to target domain. However, it does not explicitly
maintain other attributes between the source and translated text, e.g.,
text length and descriptiveness. Maintaining constraints in transfer has
several downstream applications, including data augmentation and de-biasing. We
introduce a method for such constrained unsupervised text style transfer by
introducing two complementary losses to the generative adversarial network
(GAN) family of models. Unlike the competing losses used in GANs, we introduce
cooperative losses where the discriminator and the generator cooperate and
reduce the same loss. The first is a contrastive loss and the second is a
classification loss, aiming to regularize the latent space further and bring
similar sentences across domains closer together. We demonstrate that such
training retains lexical, syntactic, and domain-specific constraints between
domains for multiple benchmark datasets, including ones where more than one
attribute changes. We show that the complementary cooperative losses improve
text quality, according to both automated and human evaluation measures.
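The two cooperative losses described above can be sketched in miniature. The following plain-Python toy is a hypothetical illustration, not the authors' implementation: it assumes an InfoNCE-style formulation for the contrastive loss (pulling paired cross-domain sentence embeddings together in the shared latent space) and a standard cross-entropy for the classification loss; the function names and the specific formulation are our assumptions.

```python
import math

def dot(u, v):
    """Dot product of two equal-length embedding vectors."""
    return sum(a * b for a, b in zip(u, v))

def contrastive_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss (hypothetical sketch).

    Each anchor embedding (e.g., a source-domain sentence) should score
    highest against its own positive (the corresponding target-domain
    sentence) among all candidates in the batch.
    """
    total = 0.0
    for i, a in enumerate(anchors):
        logits = [dot(a, p) / temperature for p in positives]
        m = max(logits)  # log-sum-exp trick for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        total += -(logits[i] - log_denom)
    return total / len(anchors)

def classification_loss(probs, labels):
    """Cross-entropy over predicted class distributions."""
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# Aligned cross-domain pairs yield a lower contrastive loss than shuffled pairs,
# which is the regularization pressure that brings similar sentences across
# domains closer together in the latent space.
aligned = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = contrastive_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

In the paper's setting both losses are cooperative: the discriminator and the generator minimize the same quantities rather than playing the usual adversarial min-max game.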
Related papers
- Domain-Agnostic Mutual Prompting for Unsupervised Domain Adaptation [27.695825570272874]
Conventional Unsupervised Domain Adaptation (UDA) strives to minimize distribution discrepancy between domains.
We propose Domain-Agnostic Mutual Prompting (DAMP) to exploit domain-invariant semantics.
Experiments on three UDA benchmarks demonstrate the superiority of DAMP over state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-05T12:06:48Z)
- A Novel Estimator of Mutual Information for Learning to Disentangle Textual Representations [27.129551973093008]
This paper introduces a novel variational upper bound to the mutual information between an attribute and the latent code of an encoder.
It controls the approximation error via the Rényi divergence, leading to both better disentangled representations and precise control of the desired degree of disentanglement.
We show the superiority of this method on fair classification and on textual style transfer tasks.
arXiv Detail & Related papers (2021-05-06T14:05:06Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- DINO: A Conditional Energy-Based GAN for Domain Translation [67.9879720396872]
Domain translation is the process of transforming data from one domain to another while preserving the common semantics.
Some of the most popular domain translation systems are based on conditional generative adversarial networks.
We propose a new framework, where two networks are simultaneously trained, in a supervised manner, to perform domain translation in opposite directions.
arXiv Detail & Related papers (2021-02-18T11:52:45Z)
- Simultaneous Semantic Alignment Network for Heterogeneous Domain Adaptation [67.37606333193357]
We propose a Simultaneous Semantic Alignment Network (SSAN) to simultaneously exploit correlations among categories and align the centroids for each category across domains.
By leveraging target pseudo-labels, a robust triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category.
Experiments on various HDA tasks across text-to-image, image-to-image and text-to-text successfully validate the superiority of our SSAN against state-of-the-art HDA methods.
arXiv Detail & Related papers (2020-08-04T16:20:37Z)
- Domain Adaptation for Semantic Parsing [68.81787666086554]
We propose a novel semantic parser for domain adaptation, where we have much fewer annotated data in the target domain than in the source domain.
Our parser benefits from a two-stage coarse-to-fine framework and can thus provide different and accurate treatments for the two stages.
Experiments on a benchmark dataset show that our method consistently outperforms several popular domain adaptation strategies.
arXiv Detail & Related papers (2020-06-23T14:47:41Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method eases the domain shift issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.