Towards Better Understanding with Uniformity and Explicit Regularization
of Embeddings in Embedding-based Neural Topic Models
- URL: http://arxiv.org/abs/2206.07960v1
- Date: Thu, 16 Jun 2022 07:02:55 GMT
- Title: Towards Better Understanding with Uniformity and Explicit Regularization
of Embeddings in Embedding-based Neural Topic Models
- Authors: Wei Shao, Lei Huang, Shuqi Liu, Shihua Ma, Linqi Song
- Abstract summary: Embedding-based neural topic models can explicitly represent words and topics by embedding them into a homogeneous feature space.
There are no explicit constraints on the training of these embeddings, which leads to a larger optimization space.
We propose an embedding-regularized neural topic model that applies specially designed training constraints to word and topic embeddings.
- Score: 16.60033525943772
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Embedding-based neural topic models can explicitly represent words and topics by embedding them into a homogeneous feature space, which yields higher interpretability. However, there are no explicit constraints on the training of these embeddings, leaving a larger optimization space. Moreover, a clear description of how the embeddings change during training and how those changes affect model performance is still lacking. In this paper, we propose an embedding-regularized neural topic model that applies specially designed training constraints to the word and topic embeddings, reducing the optimization space of the parameters. To reveal the changes and roles of the embeddings, we introduce uniformity into the embedding-based neural topic model as an evaluation metric for the embedding space. On this basis, we describe how the embeddings tend to change during training by tracking the changes in their uniformity. Furthermore, we demonstrate the impact of these changes in embedding-based neural topic models through ablation studies. Experimental results on two mainstream datasets indicate that our model significantly outperforms baseline models in terms of the harmony between topic quality and document modeling. To the best of our knowledge, this work is the first attempt to exploit uniformity to explore the changes in the embeddings of embedding-based neural topic models and their impact on model performance.
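The abstract does not spell out the exact form of the uniformity metric or of the embedding constraints. As a point of reference, uniformity is commonly defined (Wang & Isola, 2020) as the log of the average pairwise Gaussian potential between L2-normalized embeddings; the sketch below, assuming PyTorch and generic embedding-matrix names, illustrates that standard definition rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def uniformity(embeddings: torch.Tensor, t: float = 2.0) -> torch.Tensor:
    """Uniformity of an embedding matrix (one embedding per row).

    Standard definition (Wang & Isola, 2020):
        L_uniform = log( mean_{i<j} exp(-t * ||z_i - z_j||^2) )
    where z_i are the L2-normalized embeddings. Lower (more negative)
    values indicate embeddings spread more uniformly on the hypersphere.
    """
    z = F.normalize(embeddings, dim=-1)      # project rows onto the unit hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)    # all pairwise squared Euclidean distances
    return sq_dists.mul(-t).exp().mean().log()

# Hypothetical usage: track how word/topic embedding uniformity evolves during training.
# word_u  = uniformity(model.word_embeddings.weight.detach())
# topic_u = uniformity(model.topic_embeddings.detach())
```

Tracking this value for the word-embedding and topic-embedding matrices over training epochs gives the kind of uniformity curves the abstract alludes to.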
Related papers
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Enhancing Dynamical System Modeling through Interpretable Machine Learning Augmentations: A Case Study in Cathodic Electrophoretic Deposition [0.8796261172196743]
We introduce a comprehensive data-driven framework aimed at enhancing the modeling of physical systems.
As a demonstrative application, we pursue the modeling of cathodic electrophoretic deposition (EPD), commonly known as e-coating.
arXiv Detail & Related papers (2024-01-16T14:58:21Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- On the Embedding Collapse when Scaling up Recommendation Models [53.66285358088788]
We identify the embedding collapse phenomenon, in which the embedding matrix tends to occupy a low-dimensional subspace, as an inhibitor of scalability.
We propose a simple yet effective multi-embedding design incorporating embedding-set-specific interaction modules to learn embedding sets with large diversity.
arXiv Detail & Related papers (2023-10-06T17:50:38Z)
- Diversity-Aware Coherence Loss for Improving Neural Topic Models [20.98172300869239]
We propose a novel diversity-aware coherence loss that encourages the model to learn corpus-level coherence scores.
Experimental results on multiple datasets show that our method significantly improves the performance of neural topic models.
arXiv Detail & Related papers (2023-05-25T16:01:56Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- Improving Topic Segmentation by Injecting Discourse Dependencies [29.353285741379334]
We present a discourse-aware neural topic segmentation model with the injection of above-sentence discourse dependency structures.
Our empirical study on English evaluation datasets shows that injecting above-sentence discourse structures into a neural topic segmenter can substantially improve its performance.
arXiv Detail & Related papers (2022-09-18T18:22:25Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery has proposed factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality, and efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
- A Discrete Variational Recurrent Topic Model without the Reparametrization Trick [16.54912614895861]
We show how to learn a neural topic model with discrete random variables.
We show improved perplexity and document understanding across multiple corpora.
arXiv Detail & Related papers (2020-10-22T20:53:44Z)
- Improving Neural Topic Models using Knowledge Distillation [84.66983329587073]
We use knowledge distillation to combine the best attributes of probabilistic topic models and pretrained transformers.
Our modular method can be applied straightforwardly to any neural topic model to improve topic quality (a generic sketch of such a distillation objective follows below).
arXiv Detail & Related papers (2020-10-05T22:49:16Z)
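The summary above does not state the paper's exact distillation objective, so the following is only a hedged, generic sketch of knowledge distillation for a neural topic model: a KL term pulling the student topic model's word distribution toward a pretrained-transformer teacher's soft targets, combined with the usual bag-of-words reconstruction term. All tensor and parameter names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_probs: torch.Tensor,
                      bow: torch.Tensor,
                      alpha: float = 0.5,
                      temperature: float = 2.0) -> torch.Tensor:
    """Generic knowledge-distillation objective for a neural topic model.

    student_logits: unnormalized per-document word logits from the topic model's decoder.
    teacher_probs:  soft per-document word distributions from a pretrained-transformer teacher.
    bow:            bag-of-words counts used for the usual reconstruction term.
    The cited paper's actual formulation may differ; this only illustrates the idea.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Soft-target term: push the student's word distribution toward the teacher's.
    kd_term = F.kl_div(log_p_student, teacher_probs, reduction="batchmean")
    # Standard topic-model reconstruction term on the bag-of-words counts.
    recon_term = -(bow * F.log_softmax(student_logits, dim=-1)).sum(dim=-1).mean()
    return alpha * kd_term + (1.0 - alpha) * recon_term
```

In practice, the mixing weight and temperature would be tuned per dataset, and the teacher distributions would be precomputed from the pretrained transformer.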