Neural Topic Modeling with Bidirectional Adversarial Training
- URL: http://arxiv.org/abs/2004.12331v1
- Date: Sun, 26 Apr 2020 09:41:17 GMT
- Title: Neural Topic Modeling with Bidirectional Adversarial Training
- Authors: Rui Wang, Xuemeng Hu, Deyu Zhou, Yulan He, Yuxuan Xiong, Chenchen Ye,
Haiyang Xu
- Abstract summary: We propose a neural topic modeling approach called Bidirectional Adversarial Topic (BAT) model.
BAT builds a two-way projection between the document-topic distribution and the document-word distribution.
To incorporate word relatedness information, BAT is extended to the Bidirectional Adversarial Topic model with Gaussian (Gaussian-BAT).
- Score: 37.71988156164695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed a surge of interest in using neural topic
models for automatic topic extraction from text, since they avoid the
complicated mathematical derivations required for model inference in
traditional topic models such as Latent Dirichlet Allocation (LDA). However,
these models typically either assume an improper prior (e.g. Gaussian or
Logistic Normal) over the latent topic space or cannot infer the topic
distribution for a given document. To address these limitations, we propose a
neural topic modeling approach, called the Bidirectional Adversarial Topic
(BAT) model, the first attempt to apply bidirectional adversarial training to
neural topic modeling. The
proposed BAT builds a two-way projection between the document-topic
distribution and the document-word distribution. It uses a generator to capture
the semantic patterns from texts and an encoder for topic inference.
Furthermore, to incorporate word relatedness information, the Bidirectional
Adversarial Topic model with Gaussian (Gaussian-BAT) is extended from BAT. To
verify the effectiveness of BAT and Gaussian-BAT, three benchmark corpora are
used in our experiments. The experimental results show that BAT and
Gaussian-BAT obtain more coherent topics, outperforming several competitive
baselines. Moreover, when performing text clustering based on the extracted
topics, our models outperform all the baselines, with the largest improvement
achieved by Gaussian-BAT, where an increase of nearly 6% in accuracy is
observed.
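The two-way projection described in the abstract can be sketched as follows. This is a hypothetical numpy illustration of the bidirectional adversarial structure, not the authors' implementation: the single-layer linear-plus-softmax parameterization, the layer sizes, and the names `generator`, `encoder`, and `discriminator` are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, TOPICS = 50, 5  # assumed toy sizes

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Generator: projects a document-topic distribution to a
# document-word distribution (one direction of the projection).
W_gen = rng.normal(scale=0.1, size=(TOPICS, VOCAB))
def generator(theta):
    return softmax(theta @ W_gen)

# Encoder: projects a document-word distribution back to a
# document-topic distribution (the other direction).
W_enc = rng.normal(scale=0.1, size=(VOCAB, TOPICS))
def encoder(d_word):
    return softmax(d_word @ W_enc)

# Discriminator: scores a concatenated (topic, word) distribution
# pair; in adversarial training it would be optimized to separate
# encoder-side pairs from generator-side pairs.
w_disc = rng.normal(scale=0.1, size=TOPICS + VOCAB)
def discriminator(theta, d_word):
    return 1.0 / (1.0 + np.exp(-np.concatenate([theta, d_word]) @ w_disc))

# Generator-side pair: sample a topic distribution from a Dirichlet
# prior and generate a word distribution from it.
theta_fake = rng.dirichlet(np.ones(TOPICS))
pair_fake = (theta_fake, generator(theta_fake))

# Encoder-side pair: take an observed (here random) word
# distribution and infer its topic distribution.
d_real = rng.dirichlet(np.ones(VOCAB))
pair_real = (encoder(d_real), d_real)

print(discriminator(*pair_fake), discriminator(*pair_real))
```

Both outputs are scalar scores in (0, 1); a real training loop would update all three networks adversarially until the two kinds of pairs become indistinguishable.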
Related papers
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z)
- Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
arXiv Detail & Related papers (2023-10-25T17:59:12Z)
- RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling [79.56442336234221]
We introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE).
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal.
arXiv Detail & Related papers (2023-10-16T16:42:01Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Neural Dynamic Focused Topic Model [2.9005223064604078]
We leverage recent advances in neural variational inference and present an alternative neural approach to the dynamic Focused Topic Model.
We develop a neural model for topic evolution which exploits sequences of Bernoulli random variables in order to track the appearances of topics.
arXiv Detail & Related papers (2023-01-26T08:37:34Z)
- A Joint Learning Approach for Semi-supervised Neural Topic Modeling [25.104653662416023]
We introduce the Label-Indexed Neural Topic Model (LI-NTM), which is the first effective upstream semi-supervised neural topic model.
We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks.
arXiv Detail & Related papers (2022-04-07T04:42:17Z)
- Improving Neural Topic Models using Knowledge Distillation [84.66983329587073]
We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers.
Our modular method can be straightforwardly applied with any neural topic model to improve topic quality.
arXiv Detail & Related papers (2020-10-05T22:49:16Z)
- Neural Topic Modeling with Cycle-Consistent Adversarial Training [17.47328718035538]
We propose Topic Modeling with Cycle-consistent Adversarial Training (ToMCAT) and its supervised version sToMCAT.
ToMCAT employs a generator network to interpret topics and an encoder network to infer document topics.
sToMCAT extends ToMCAT by incorporating document labels into the topic modeling process to help discover more coherent topics.
arXiv Detail & Related papers (2020-09-29T12:41:27Z)
- Context Reinforced Neural Topic Modeling over Short Texts [15.487822291146689]
We propose a Context Reinforced Neural Topic Model (CRNTM).
CRNTM infers the topic for each word in a narrow range by assuming that each short text covers only a few salient topics.
Experiments on two benchmark datasets validate the effectiveness of the proposed model on both topic discovery and text classification.
arXiv Detail & Related papers (2020-08-11T06:41:53Z)
- Topic Adaptation and Prototype Encoding for Few-Shot Visual Storytelling [81.33107307509718]
We propose a topic adaptive storyteller to model the ability of inter-topic generalization.
We also propose a prototype encoding structure to model the ability of intra-topic derivation.
Experimental results show that topic adaptation and prototype encoding structure mutually bring benefit to the few-shot model.
arXiv Detail & Related papers (2020-08-11T03:55:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.