Advanced Conditional Variational Autoencoders (A-CVAE): Towards
interpreting open-domain conversation generation via disentangling latent
feature representation
- URL: http://arxiv.org/abs/2207.12696v1
- Date: Tue, 26 Jul 2022 07:39:36 GMT
- Title: Advanced Conditional Variational Autoencoders (A-CVAE): Towards
interpreting open-domain conversation generation via disentangling latent
feature representation
- Authors: Ye Wang, Jingbo Liao, Hong Yu, Guoyin Wang, Xiaoxia Zhang and Li Liu
- Abstract summary: This paper proposes to harness the generative model with a priori knowledge through a cognitive approach involving mesoscopic scale feature disentanglement.
We propose a new metric for open-domain dialogues, which can objectively evaluate the interpretability of the latent space distribution.
- Score: 15.742077523458995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, end-to-end deep-learning-based open-domain dialogue systems
remain black-box models, which makes it easy for data-driven models to generate
irrelevant content. Specifically, latent variables are highly entangled with
different semantics in the latent space due to the lack of prior knowledge to
guide training. To address this problem, this paper proposes to harness the
generative model with a priori knowledge through a cognitive approach involving
mesoscopic-scale feature disentanglement. In particular, the model integrates
macro-level guided-category knowledge and micro-level open-domain dialogue
data during training, injecting the prior knowledge into the latent space,
which enables the model to disentangle the latent variables at the mesoscopic
scale. In addition, we propose a new metric for open-domain dialogues that
objectively evaluates the interpretability of the latent-space distribution.
Finally, we validate our model on different datasets and experimentally
demonstrate that it generates higher-quality and more interpretable dialogues
than other models.
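Below is a minimal sketch of the general idea described in the abstract: a conditional VAE whose latent prior is conditioned on a macro-level category label, so that latent variables for different dialogue categories are kept apart instead of collapsing into a single entangled Gaussian. All module names, sizes, and the overall layout are illustrative assumptions, not details taken from the paper or its released code.

```python
# Hedged sketch of a category-guided conditional VAE (not the authors' A-CVAE code).
import torch
import torch.nn as nn

class CategoryGuidedCVAE(nn.Module):
    def __init__(self, vocab_size, n_categories, emb_dim=128, hid_dim=256, z_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Recognition network q(z | context, response, category)
        self.post_mu = nn.Linear(hid_dim + n_categories, z_dim)
        self.post_logvar = nn.Linear(hid_dim + n_categories, z_dim)
        # Category-conditioned prior p(z | category): the macro-level prior knowledge
        self.prior_mu = nn.Linear(n_categories, z_dim)
        self.prior_logvar = nn.Linear(n_categories, z_dim)
        self.decoder = nn.GRU(emb_dim + z_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, context, response, category_onehot):
        # Encode the concatenated context and response tokens.
        _, h = self.encoder(self.embed(torch.cat([context, response], dim=1)))
        h = torch.cat([h[-1], category_onehot], dim=-1)
        mu, logvar = self.post_mu(h), self.post_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        p_mu, p_logvar = self.prior_mu(category_onehot), self.prior_logvar(category_onehot)
        # KL(q || p) between the posterior and the category-guided Gaussian prior.
        kl = 0.5 * ((p_logvar - logvar)
                    + (logvar.exp() + (mu - p_mu) ** 2) / p_logvar.exp() - 1).sum(-1)
        # Decode the response conditioned on the sampled latent variable.
        dec_in = torch.cat([self.embed(response),
                            z.unsqueeze(1).expand(-1, response.size(1), -1)], dim=-1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), kl.mean()
```

A training loss would combine the token-level reconstruction loss over the decoder logits with the KL term, as in a standard CVAE ELBO; the category-dependent prior is what pushes latent codes for different categories toward separate regions of the latent space.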
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z) - MergeNet: Knowledge Migration across Heterogeneous Models, Tasks, and Modalities [72.68829963458408]
We present MergeNet, which learns to bridge the gap between the parameter spaces of heterogeneous models.
The core mechanism of MergeNet lies in the parameter adapter, which operates by querying the source model's low-rank parameters.
MergeNet is learned alongside both models, allowing our framework to dynamically transfer and adapt knowledge relevant to the current stage.
arXiv Detail & Related papers (2024-04-20T08:34:39Z) - Diversity-Aware Coherence Loss for Improving Neural Topic Models [20.98172300869239]
We propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores.
Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models.
arXiv Detail & Related papers (2023-05-25T16:01:56Z) - Large Language Models with Controllable Working Memory [64.71038763708161]
Large language models (LLMs) have led to a series of breakthroughs in natural language processing (NLP).
What further sets these models apart is the massive amounts of world knowledge they internalize during pretraining.
How the model's world knowledge interacts with the factual information presented in the context remains underexplored.
arXiv Detail & Related papers (2022-11-09T18:58:29Z) - Learning Semantic Textual Similarity via Topic-informed Discrete Latent
Variables [17.57873577962635]
We develop a topic-informed discrete latent variable model for semantic textual similarity.
Our model learns a shared latent space for sentence-pair representation via vector quantization.
We show that our model is able to surpass several strong neural baselines in semantic textual similarity tasks.
arXiv Detail & Related papers (2022-11-07T15:09:58Z) - Learning Interpretable Latent Dialogue Actions With Less Supervision [3.42658286826597]
We present a novel architecture for explainable modeling of task-oriented dialogues with discrete latent variables.
Our model is based on variational recurrent neural networks (VRNN) and requires no explicit annotation of semantic information.
arXiv Detail & Related papers (2022-09-22T16:14:06Z) - QAGAN: Adversarial Approach To Learning Domain Invariant Language
Features [0.76146285961466]
We explore an adversarial training approach to learning domain-invariant features.
We achieve a 15.2% improvement in EM score and a 5.6% boost in F1 score on the out-of-domain validation dataset.
arXiv Detail & Related papers (2022-06-24T17:42:18Z) - RevUp: Revise and Update Information Bottleneck for Event Representation [16.54912614895861]
In machine learning, latent variables play a key role in capturing the underlying structure of data, but they are often unsupervised.
We propose a semi-supervised information bottleneck-based model that enables the use of side knowledge to direct the learning of discrete latent variables.
We show that our approach generalizes an existing method of parameter injection, and perform an empirical case study of our approach on language-based event modeling.
arXiv Detail & Related papers (2022-05-24T17:54:59Z) - Meta-learning using privileged information for dynamics [66.32254395574994]
We extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting.
We validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
arXiv Detail & Related papers (2021-04-29T12:18:02Z) - Context Decoupling Augmentation for Weakly Supervised Semantic
Segmentation [53.49821324597837]
Weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years.
We present a Context Decoupling Augmentation (CDA) method to change the inherent context in which the objects appear.
To validate the effectiveness of the proposed method, extensive experiments on the PASCAL VOC 2012 dataset with several alternative network architectures demonstrate that CDA boosts various popular WSSS methods to a new state of the art by a large margin.
arXiv Detail & Related papers (2021-03-02T15:05:09Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent
Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
With a strong auto-regressive decoder, VAEs tend to ignore the latent variables.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
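As a concrete illustration of the discrete latent bottleneck mentioned in the last entry above (and of the vector-quantised sentence representations a few entries earlier), here is a generic VQ-style quantisation layer. It sketches the standard vector-quantisation technique under assumed codebook sizes; it is one common way to impose a discrete latent bottleneck, not the exact method of either paper.

```python
# Generic vector-quantised latent bottleneck (illustrative, not paper code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, n_codes=128, code_dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, code_dim)
        self.beta = beta  # weight of the commitment term

    def forward(self, z_e):
        # z_e: (batch, code_dim) continuous encoder output
        dists = torch.cdist(z_e, self.codebook.weight)   # distances to all codes
        idx = dists.argmin(dim=-1)                        # nearest code index
        z_q = self.codebook(idx)                          # quantised latent
        # Codebook loss pulls codes toward encoder outputs; commitment loss
        # keeps the encoder close to its chosen code.
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # Straight-through estimator: copy gradients from z_q back to z_e.
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx, loss

# Usage: quantise a batch of sentence encodings before decoding.
vq = VectorQuantizer()
z_q, codes, vq_loss = vq(torch.randn(8, 64))
```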
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.