Have you tried Neural Topic Models? Comparative Analysis of Neural and
Non-Neural Topic Models with Application to COVID-19 Twitter Data
- URL: http://arxiv.org/abs/2105.10165v1
- Date: Fri, 21 May 2021 07:24:09 GMT
- Title: Have you tried Neural Topic Models? Comparative Analysis of Neural and
Non-Neural Topic Models with Application to COVID-19 Twitter Data
- Authors: Andrew Bennett, Dipendra Misra, and Nga Than
- Abstract summary: We conduct a comparative study examining state-of-the-art neural versus non-neural topic models.
We show that neural topic models outperform their classical counterparts on standard evaluation metrics.
We also propose a novel regularization term for neural topic models, which is designed to address the well-documented problem of mode collapse.
- Score: 11.199249808462458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Topic models are widely used in studying social phenomena. We conduct a
comparative study examining state-of-the-art neural versus non-neural topic
models, performing a rigorous quantitative and qualitative assessment on a
dataset of tweets about the COVID-19 pandemic. Our results show that not only
do neural topic models outperform their classical counterparts on standard
evaluation metrics, but they also produce more coherent topics, which are of
great benefit when studying complex social problems. We also propose a novel
regularization term for neural topic models, which is designed to address the
well-documented problem of mode collapse, and demonstrate its effectiveness.
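The abstract does not spell out the proposed regularization term, so the sketch below is only a hypothetical illustration of the kind of term commonly used against mode collapse in neural topic models: it penalizes pairwise cosine similarity between topic-word vectors. The function name, the weighting, and its placement in the loss are all assumptions, not the paper's method.

```python
# Hypothetical diversity regularizer (a stand-in, NOT the paper's term).
import torch
import torch.nn.functional as F

def topic_diversity_penalty(beta: torch.Tensor) -> torch.Tensor:
    """beta: (num_topics, vocab_size) topic-word weights."""
    b = F.normalize(beta, p=2, dim=1)          # unit-norm rows
    gram = b @ b.t()                           # (K, K) pairwise cosines
    k = gram.size(0)
    off_diag = gram - torch.eye(k, device=beta.device, dtype=beta.dtype)
    # 0 = mutually orthogonal topics, 1 = fully collapsed topics.
    return off_diag.abs().sum() / (k * (k - 1))

# Usage (assumed): loss = recon_nll + kl_term + lam * topic_diversity_penalty(beta)
```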
Related papers
- Historia Magistra Vitae: Dynamic Topic Modeling of Roman Literature using Neural Embeddings [10.095706051685665]
We compare topic models built using traditional statistical models (LDA and NMF) with a BERT-based model.
We find that while quantitative metrics prefer statistical models, qualitative evaluation finds better insights from the neural model.
arXiv Detail & Related papers (2024-06-27T05:38:49Z)
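For readers unfamiliar with the "traditional statistical models" baseline in the entry above, here is a minimal sklearn sketch of LDA and NMF on a toy corpus. The corpus, topic count, and stop-word choice are placeholders, not the paper's setup.

```python
# Classical baselines: LDA on raw counts, NMF on tf-idf (toy data).
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["arms and the man i sing",
        "sing goddess the wrath of achilles",
        "of arms and the hero i sing",
        "the wrath of achilles brought pain"]

count_vec = CountVectorizer(stop_words="english")
tfidf_vec = TfidfVectorizer(stop_words="english")
counts = count_vec.fit_transform(docs)   # LDA expects raw term counts
tfidf = tfidf_vec.fit_transform(docs)    # NMF is typically run on tf-idf

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
nmf = NMF(n_components=2, random_state=0, init="nndsvd").fit(tfidf)

def top_words(model, names, n=3):
    # components_ holds one row of word weights per topic.
    return [[names[i] for i in row.argsort()[::-1][:n]]
            for row in model.components_]

print(top_words(lda, count_vec.get_feature_names_out()))
print(top_words(nmf, tfidf_vec.get_feature_names_out()))
```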
- Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis [5.757610495733924]
We conduct the first evaluation of neural, supervised, and classical topic models in an interactive, task-based setting.
We show that current automated metrics do not provide a complete picture of topic modeling capabilities.
arXiv Detail & Related papers (2024-01-29T17:54:04Z)
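The "automated metrics" that the TENOR paper finds incomplete are typically coherence scores. Below is a minimal gensim sketch of NPMI coherence on toy topics; the data and the topn setting are placeholders.

```python
# Automated coherence (NPMI) via gensim, on toy texts and topics.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

texts = [["virus", "mask", "vaccine"],
         ["lockdown", "mask", "school"],
         ["vaccine", "dose", "virus"]]
topics = [["virus", "vaccine", "mask"],
          ["lockdown", "school", "dose"]]

dictionary = Dictionary(texts)
cm = CoherenceModel(topics=topics, texts=texts, dictionary=dictionary,
                    coherence="c_npmi", topn=3)
print(cm.get_coherence())  # higher = more coherent by this automated proxy
```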
- Do Neural Topic Models Really Need Dropout? Analysis of the Effect of Dropout in Topic Modeling [0.6445605125467573]
Dropout is a widely used regularization technique for mitigating overfitting in large feedforward neural networks trained on small datasets.
We analyze the effects of dropout in both the encoder and the decoder of the VAE architecture in three widely used neural topic models.
arXiv Detail & Related papers (2023-03-28T13:45:39Z)
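As context for the dropout analysis above, a generic ProdLDA-style sketch showing the two places dropout enters a VAE-based neural topic model. The layer sizes and dropout rates are assumptions, not the paper's exact models.

```python
# Where dropout sits in a VAE-style neural topic model (generic sketch).
import torch
import torch.nn as nn

class NTMEncoder(nn.Module):
    def __init__(self, vocab_size, num_topics, hidden=200, enc_dropout=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.Softplus(),
            nn.Dropout(enc_dropout),          # encoder dropout under study
        )
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)

    def forward(self, bow):
        h = self.body(bow)                    # bow: (batch, vocab) counts
        return self.mu(h), self.logvar(h)     # Gaussian posterior params

class NTMDecoder(nn.Module):
    def __init__(self, vocab_size, num_topics, dec_dropout=0.2):
        super().__init__()
        self.drop = nn.Dropout(dec_dropout)   # decoder dropout under study
        self.beta = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, theta):
        # theta: (batch, K) topic proportions -> word log-probabilities.
        return torch.log_softmax(self.beta(self.drop(theta)), dim=1)
```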
- Neural Dynamic Focused Topic Model [2.9005223064604078]
We leverage recent advances in neural variational inference and present an alternative neural approach to the dynamic Focused Topic Model.
We develop a neural model for topic evolution that exploits sequences of Bernoulli random variables to track the appearance of topics.
arXiv Detail & Related papers (2023-01-26T08:37:34Z)
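One way to realize the Bernoulli appearance variables above in a differentiable model is a relaxed (concrete) Bernoulli distribution; the shapes and temperature in this sketch are assumptions.

```python
# Near-binary topic "on/off" gates via a relaxed Bernoulli (generic sketch).
import torch
from torch.distributions import RelaxedBernoulli

def sample_topic_gates(logits: torch.Tensor, temperature: float = 0.5):
    """logits: (time_steps, num_topics) log-odds that a topic appears."""
    # rsample() keeps the sampling step differentiable for VI training.
    return RelaxedBernoulli(temperature, logits=logits).rsample()

T, K = 4, 6
logits = torch.zeros(T, K, requires_grad=True)
gates = sample_topic_gates(logits)                        # near-binary (T, K)
theta = torch.softmax(torch.randn(T, K), dim=1) * gates   # focused proportions
```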
- Cross-Model Comparative Loss for Enhancing Neuronal Utility in Language Understanding [82.46024259137823]
We propose a cross-model comparative loss for a broad range of tasks.
We demonstrate the universal effectiveness of comparative loss through extensive experiments on 14 datasets from 3 distinct NLU tasks.
arXiv Detail & Related papers (2023-01-10T03:04:27Z)
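The exact form of the comparative loss is not given in this summary. One plausible reading, sketched below, adds a hinge penalty whenever a fuller model incurs a higher per-example task loss than an ablated variant; the function name and margin are assumptions.

```python
# Hypothetical cross-model comparative loss (one reading, not the paper's exact form).
import torch

def comparative_loss(loss_full: torch.Tensor, loss_ablated: torch.Tensor,
                     margin: float = 0.0) -> torch.Tensor:
    """loss_full, loss_ablated: per-example task losses, shape (batch,)."""
    # Hinge term fires only when the full model underperforms its ablation.
    compare = torch.clamp(loss_full - loss_ablated + margin, min=0.0).mean()
    return loss_full.mean() + loss_ablated.mean() + compare
```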
- Are Neural Topic Models Broken? [81.15470302729638]
We study the relationship between automated and human evaluation of topic models.
We find that neural topic models fare worse in both respects compared to an established classical method.
arXiv Detail & Related papers (2022-10-28T14:38:50Z)
- A Joint Learning Approach for Semi-supervised Neural Topic Modeling [25.104653662416023]
We introduce the Label-Indexed Neural Topic Model (LI-NTM), which is the first effective upstream semi-supervised neural topic model.
We find that LI-NTM outperforms existing neural topic models in document reconstruction benchmarks.
arXiv Detail & Related papers (2022-04-07T04:42:17Z)
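The summary does not specify LI-NTM's objective. Below is a generic semi-supervised sketch that mixes document reconstruction with a label loss on labeled documents only; the masking convention (-1 for unlabeled) and the weighting are assumptions.

```python
# Generic semi-supervised topic-model objective (a sketch, not LI-NTM's exact loss).
import torch
import torch.nn.functional as F

def joint_loss(recon_logp, bow, label_logits, labels, lam=1.0):
    """recon_logp: (B, V) word log-probs; bow: (B, V) counts;
    label_logits: (B, C); labels: (B,) long, -1 marks unlabeled docs."""
    recon = -(bow * recon_logp).sum(dim=1).mean()   # reconstruction NLL
    mask = labels >= 0                               # labeled documents only
    if mask.any():
        sup = F.cross_entropy(label_logits[mask], labels[mask])
    else:
        sup = recon_logp.new_zeros(())
    return recon + lam * sup
```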
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressiveness afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
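In the physics-informed spirit the EINN summary describes, a generic sketch (not the paper's construction) penalizes a network for violating SIR dynamics; the network shape and rate constants are placeholders.

```python
# PINN-style physics residual for SIR dynamics (generic sketch).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 3))  # t -> (S, I, R)

def sir_residual(net, t, beta=0.3, gamma=0.1):
    """t: (N, 1) times with requires_grad=True."""
    sir = net(t)
    S, I, R = sir[:, 0:1], sir[:, 1:2], sir[:, 2:3]
    dS, dI, dR = (torch.autograd.grad(x, t, torch.ones_like(x), create_graph=True)[0]
                  for x in (S, I, R))
    # Penalize violations of dS/dt = -bSI, dI/dt = bSI - gI, dR/dt = gI.
    return ((dS + beta * S * I) ** 2
            + (dI - beta * S * I + gamma * I) ** 2
            + (dR - gamma * I) ** 2).mean()

t = torch.linspace(0, 1, 50).reshape(-1, 1).requires_grad_(True)
physics_loss = sir_residual(net, t)   # added to the usual data-fitting loss
```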
- Is Automated Topic Model Evaluation Broken?: The Incoherence of Coherence [62.826466543958624]
We look at the standardization gap and the validation gap in topic model evaluation.
Recent models relying on neural components surpass classical topic models according to these metrics.
We use automatic coherence along with the two most widely accepted human judgment tasks, namely, topic rating and word intrusion.
arXiv Detail & Related papers (2021-07-05T17:58:52Z)
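Word intrusion, one of the two human-judgment tasks named above, can be set up roughly as follows. The intruder-selection heuristic here is a simplification of the standard protocol, and the example topics are toy data.

```python
# Building one word-intrusion item (simplified protocol).
import random

def make_intrusion_item(topic, other_topics, n_top=5, rng=random):
    """topic / other_topics: word lists ranked by within-topic probability."""
    top = topic[:n_top]
    # Intruder: a top word of some other topic that is absent from this one.
    pool = [w for t in other_topics for w in t[:n_top] if w not in topic]
    intruder = rng.choice(pool)
    words = top + [intruder]
    rng.shuffle(words)
    return words, intruder   # raters try to spot `intruder` among `words`

item, answer = make_intrusion_item(
    ["virus", "vaccine", "mask", "dose", "booster"],
    [["school", "lockdown", "teacher", "remote", "class"]])
```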
- Improving Neural Topic Models using Knowledge Distillation [84.66983329587073]
We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers.
Our modular method can be straightforwardly applied with any neural topic model to improve topic quality.
arXiv Detail & Related papers (2020-10-05T22:49:16Z)
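A hedged sketch of the distillation idea above: pull the neural topic model's reconstruction toward a softened word distribution produced by a transformer teacher. The mixing weight and tensor layout are assumptions, not the paper's formulation.

```python
# Distillation-flavored reconstruction loss for a neural topic model (sketch).
import torch
import torch.nn.functional as F

def distilled_recon_loss(student_logp, bow, teacher_probs, alpha=0.5):
    """student_logp: (B, V) word log-probs; bow: (B, V) counts;
    teacher_probs: (B, V) teacher word distribution (rows sum to 1)."""
    nll = -(bow * student_logp).sum(dim=1).mean()            # usual reconstruction
    # KL(teacher || student); F.kl_div takes log-probs as input, probs as target.
    kd = F.kl_div(student_logp, teacher_probs, reduction="batchmean")
    return (1 - alpha) * nll + alpha * kd
```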
- Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study [81.11161697133095]
We take the NER task as a testbed to analyze the generalization behavior of existing models from different perspectives.
Experiments with in-depth analyses diagnose the bottleneck of existing neural NER models.
As a by-product of this paper, we have open-sourced a project that includes a comprehensive summary of recent NER papers.
arXiv Detail & Related papers (2020-01-12T04:33:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.