Semantic Space Grounded Weighted Decoding for Multi-Attribute
Controllable Dialogue Generation
- URL: http://arxiv.org/abs/2305.02820v2
- Date: Sun, 5 Nov 2023 10:35:02 GMT
- Title: Semantic Space Grounded Weighted Decoding for Multi-Attribute
Controllable Dialogue Generation
- Authors: Zhiling Zhang and Mengyue Wu and Kenny Q. Zhu
- Abstract summary: We propose a novel framework called DASC that possesses strong controllability with a weighted decoding paradigm.
Generation with multiple attributes is then intuitively implemented as an interpolation of multiple attribute embeddings.
Experiments show that DASC can achieve high control accuracy in a generation task with simultaneous control of 3 aspects.
- Score: 41.23970507903113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Controlling chatbot utterance generation with multiple attributes such as
personalities, emotions and dialogue acts is a practically useful but
under-studied problem. We propose a novel framework called DASC that possesses
strong controllability with a weighted decoding paradigm, while improving
generation quality by grounding generation in an attribute semantic space.
Generation with multiple attributes is then intuitively implemented as an
interpolation of multiple attribute embeddings, which results in a substantial
reduction in model size. Experiments show that DASC can achieve high control
accuracy in a generation task with simultaneous control of 3 aspects, while
also producing interesting and reasonably sensible responses, even in an
out-of-distribution robustness test.
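As an illustration of the core idea, the sketch below averages several attribute embeddings into a single control vector, projects it onto the vocabulary, and adds the resulting attribute scores to the base language model's next-token logits with a tunable weight. This is a minimal, hypothetical rendering of weighted decoding with interpolated attribute embeddings, not the paper's released implementation; the GPT-2 backbone, the reuse of the LM head as the projection, the randomly initialized attribute embeddings, and the weight ALPHA are all assumptions of the sketch.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical sketch of weighted decoding with interpolated attribute embeddings.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

hidden = model.config.n_embd

# One embedding per desired attribute value (random here; learned in practice).
attribute_embeddings = {
    "emotion=joy":    torch.randn(hidden),
    "act=question":   torch.randn(hidden),
    "persona=polite": torch.randn(hidden),
}

# Multi-attribute control as an interpolation (simple average) of the embeddings.
control = torch.stack(list(attribute_embeddings.values())).mean(dim=0)

# Project the control vector onto the vocabulary to get per-token attribute scores.
# Reusing the LM head as this projection is an assumption of the sketch.
attr_logits = model.lm_head(control)

ALPHA = 2.0  # control strength: larger values trade fluency for control accuracy

prompt = "I just got the results back and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        lm_logits = model(input_ids).logits[0, -1]   # base next-token logits
        mixed = lm_logits + ALPHA * attr_logits      # weighted decoding step
        next_id = torch.multinomial(F.softmax(mixed, dim=-1), 1)
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(input_ids[0], skip_special_tokens=True))
```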
Related papers
- Multi-Aspect Controllable Text Generation with Disentangled Counterfactual Augmentation [20.15822422715231]
Multi-aspect controllable text generation aims to control attributes of the generated text across multiple aspects.
We propose MAGIC, a new multi-aspect controllable text generation method with disentangled counterfactual augmentation.
Experiments show that MAGIC outperforms state-of-the-art baselines in both imbalanced and balanced attribute correlation scenarios.
arXiv Detail & Related papers (2024-05-30T11:25:42Z)
- Exploiting Modality-Specific Features For Multi-Modal Manipulation Detection And Grounding [54.49214267905562]
We construct a transformer-based framework for multi-modal manipulation detection and grounding tasks.
Our framework simultaneously explores modality-specific features while preserving the capability for multi-modal alignment.
We propose an implicit manipulation query (IMQ) that adaptively aggregates global contextual cues within each modality.
arXiv Detail & Related papers (2023-09-22T06:55:41Z)
- Seen to Unseen: Exploring Compositional Generalization of Multi-Attribute Controllable Dialogue Generation [23.79168163871952]
Existing controllable dialogue generation work focuses on single-attribute control.
We propose a prompt-based disentangled controllable dialogue generation model, DCG.
arXiv Detail & Related papers (2023-06-17T10:50:19Z)
- MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space [110.85888003111653]
Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously.
We introduce a novel approach for multi-aspect control, namely MacLaSa, that estimates a compact latent space for multiple aspects.
We show that MacLaSa outperforms several strong baselines on attribute relevance and textual quality while maintaining a high inference speed.
arXiv Detail & Related papers (2023-05-22T07:30:35Z)
- A Distributional Lens for Multi-Aspect Controllable Text Generation [17.97374410245602]
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control.
Existing methods achieve complex multi-aspect control by fusing multiple controllers, each learned for a single aspect.
We propose to directly search for the intersection areas of multiple attribute distributions as their combination for generation (a generic sketch of this idea follows the related-papers list).
arXiv Detail & Related papers (2022-10-06T13:08:04Z)
- Controllable Dialogue Generation with Disentangled Multi-grained Style Specification and Attribute Consistency Reward [47.96949534259019]
We propose a controllable dialogue generation model to steer response generation under multi-attribute constraints.
We categorize the commonly used control attributes into global and local ones, which differ in the granularity of their effect on response generation.
Our model can significantly outperform competitive baselines in terms of response quality, content diversity and controllability.
arXiv Detail & Related papers (2021-09-14T14:29:38Z)
- Is Disentanglement enough? On Latent Representations for Controllable Music Generation [78.8942067357231]
In the absence of a strong generative decoder, disentanglement does not necessarily imply controllability.
The structure of the latent space with respect to the VAE-decoder plays an important role in boosting the ability of a generative model to manipulate different attributes.
arXiv Detail & Related papers (2021-08-01T18:37:43Z)
- Controllable Text Generation with Focused Variation [71.07811310799664]
Focused-Variation Network (FVN) is a novel model to control language generation.
FVN learns disjoint discrete latent spaces for each attribute inside codebooks, which allows for both controllability and diversity (a per-attribute codebook sketch follows the related-papers list).
We evaluate FVN on two text generation datasets with annotated content and style, and show state-of-the-art performance as assessed by automatic and human evaluations.
arXiv Detail & Related papers (2020-09-25T06:31:06Z)
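The "Distributional Lens" entry above describes combining attributes by searching for the intersection of their distributions; the sketch referenced there is given below. It is a generic illustration, not the paper's algorithm: if each attribute's latent distribution is approximated by a Gaussian, the mode of their product, a natural "intersection" point, has a closed form given by the precision-weighted mean.

```python
import numpy as np

# Generic illustration of intersecting attribute distributions (not the paper's method).
# For Gaussians N(mu_i, Sigma_i), the mode of their product is:
#   Sigma* = (sum_i Sigma_i^{-1})^{-1},   mu* = Sigma* @ sum_i Sigma_i^{-1} mu_i

def gaussian_intersection(mus, sigmas):
    """Precision-weighted combination of several Gaussian attribute distributions."""
    precisions = [np.linalg.inv(s) for s in sigmas]
    combined_cov = np.linalg.inv(sum(precisions))
    combined_mean = combined_cov @ sum(p @ m for p, m in zip(precisions, mus))
    return combined_mean, combined_cov

# Two hypothetical 2-D attribute distributions (e.g. a sentiment and a topic aspect).
mu_a, cov_a = np.array([1.0, 0.0]), 0.5 * np.eye(2)
mu_b, cov_b = np.array([0.0, 1.0]), 0.5 * np.eye(2)

center, _ = gaussian_intersection([mu_a, mu_b], [cov_a, cov_b])
print(center)  # [0.5 0.5] -- a latent point plausible under both attributes
```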
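Similarly, the "Focused Variation" entry mentions disjoint discrete latent spaces (codebooks), one per attribute; the sketch below shows only the bare mechanics of such per-attribute codebooks. It is not FVN itself: the codebook sizes, attribute names, and the use of a concatenated code vector to condition a decoder are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PerAttributeCodebooks(nn.Module):
    """One disjoint discrete codebook (embedding table) per controllable attribute."""

    def __init__(self, num_codes=16, code_dim=64, attributes=("content", "style")):
        super().__init__()
        self.codebooks = nn.ModuleDict(
            {name: nn.Embedding(num_codes, code_dim) for name in attributes}
        )

    def forward(self, choices):
        # choices maps attribute name -> index of the desired discrete code.
        codes = [self.codebooks[name](torch.tensor(idx)) for name, idx in choices.items()]
        # The concatenated codes would condition a decoder (e.g. as a prefix state).
        return torch.cat(codes, dim=-1)

control = PerAttributeCodebooks()({"content": 3, "style": 7})
print(control.shape)  # torch.Size([128])
```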
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.