Towards Generalising Neural Topical Representations
- URL: http://arxiv.org/abs/2307.12564v4
- Date: Thu, 13 Jun 2024 02:14:30 GMT
- Title: Towards Generalising Neural Topical Representations
- Authors: Xiaohao Yang, He Zhao, Dinh Phung, Lan Du
- Abstract summary: Topic models have evolved from conventional Bayesian probabilistic models to recent Neural Topic Models (NTMs).
NTMs have shown promising performance when trained and tested on a specific corpus, but their generalisation ability across corpora has yet to be studied.
In this work, we aim to improve NTMs so that their representation power for documents generalises reliably across corpora and tasks.
- Score: 12.566953840563704
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Topic models have evolved from conventional Bayesian probabilistic models to recent Neural Topic Models (NTMs). Although NTMs have shown promising performance when trained and tested on a specific corpus, their generalisation ability across corpora has yet to be studied. In practice, we often expect that an NTM trained on a source corpus can still produce quality topical representations (i.e., latent distributions over topics) for documents from different target corpora to a certain degree. In this work, we aim to improve NTMs further so that their representation power for documents generalises reliably across corpora and tasks. To do so, we propose to enhance NTMs by narrowing the semantic distance between similar documents, with the underlying assumption that documents from different corpora may share similar semantics. Specifically, we obtain a similar document for each training document by text data augmentation. Then, we optimise NTMs further by minimising the semantic distance between each pair, measured by the Topical Optimal Transport (TopicalOT) distance, which computes the optimal transport distance between their topical representations. Our framework can be readily applied to most NTMs as a plug-and-play module. Extensive experiments show that our framework significantly improves the generalisation ability regarding neural topical representation across corpora. Our code and datasets are available at: https://github.com/Xiaohao-Yang/Topic_Model_Generalisation.
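As a rough illustration of the TopicalOT idea, the sketch below computes an entropy-regularised (Sinkhorn) OT distance between the topical representations of a document and its augmented counterpart, with transport costs taken as cosine distances between topic embeddings. The function name, the cost construction, and the randomly generated embeddings are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sinkhorn_distance(p, q, C, reg=0.1, n_iters=200):
    """Entropy-regularised OT distance between distributions p and q
    under cost matrix C, via Sinkhorn iterations."""
    K = np.exp(-C / reg)                      # Gibbs kernel
    u = np.ones_like(p)
    v = np.ones_like(q)
    for _ in range(n_iters):
        v = q / (K.T @ u)
        u = p / (K @ v)
    T = u[:, None] * K * v[None, :]           # transport plan
    return float((T * C).sum())

# Toy example: topical representations of a document and its augmentation,
# with transport costs from (randomly generated) unit-norm topic embeddings.
rng = np.random.default_rng(0)
n_topics, dim = 5, 16
E = rng.normal(size=(n_topics, dim))
E /= np.linalg.norm(E, axis=1, keepdims=True)
C = 1.0 - E @ E.T                             # cosine distance between topics

theta_doc = rng.dirichlet(np.ones(n_topics))  # original document
theta_aug = rng.dirichlet(np.ones(n_topics))  # augmented document
print(sinkhorn_distance(theta_doc, theta_aug, C))
```

Minimising this distance over document pairs is what pulls semantically similar documents towards similar topical representations.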
Related papers
- Improving the TENOR of Labeling: Re-evaluating Topic Models for Content Analysis [5.757610495733924]
We conduct the first evaluation of neural, supervised and classical topic models in an interactive task based setting.
We show that current automated metrics do not provide a complete picture of topic modeling capabilities.
arXiv Detail & Related papers (2024-01-29T17:54:04Z)
- Unified Model Learning for Various Neural Machine Translation [63.320005222549646]
Existing machine translation (NMT) studies mainly focus on developing dataset-specific models.
We propose a "versatile" model, i.e., Unified Model Learning for NMT (UMLNMT), that works with data from different tasks.
UMLNMT yields substantial improvements over dataset-specific models with significantly reduced model deployment costs.
arXiv Detail & Related papers (2023-05-04T12:21:52Z)
- Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation [50.54059385277964]
We present a novel data augmentation paradigm termed Continuous Semantic Augmentation (CsaNMT).
CsaNMT augments each training instance with an adjacency region that could cover adequate variants of literal expression under the same meaning.
arXiv Detail & Related papers (2022-04-14T08:16:28Z)
- Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z)
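The nearest-neighbour LM family in the entry above forms its predictive distribution by interpolating a parametric LM with a kNN distribution over an external datastore of (context vector, next token) pairs. The sketch below shows that standard recipe on toy data; it does not reproduce the paper's specific speed-up techniques, and all names and hyperparameters are illustrative:

```python
import numpy as np

def knn_lm_probs(p_lm, ctx, keys, values, vocab_size, k=16, lam=0.25, temp=1.0):
    """Interpolate a parametric LM distribution with a kNN distribution
    built from an external datastore of (context vector, next token) pairs."""
    d2 = ((keys - ctx) ** 2).sum(axis=1)      # squared L2 distance to all keys
    nn = np.argsort(d2)[:k]                   # indices of the k nearest contexts
    w = np.exp(-d2[nn] / temp)
    w /= w.sum()                              # softmax over negative distances
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, values[nn], w)           # scatter weight onto stored tokens
    return lam * p_knn + (1.0 - lam) * p_lm

# Toy usage: a datastore of 1000 cached contexts over a 50-word vocabulary.
rng = np.random.default_rng(1)
V, D, N = 50, 32, 1000
keys = rng.normal(size=(N, D))                # cached context vectors
values = rng.integers(0, V, size=N)           # observed next tokens
p_lm = rng.dirichlet(np.ones(V))              # parametric LM prediction
p = knn_lm_probs(p_lm, rng.normal(size=D), keys, values, V)
print(round(p.sum(), 6))                      # 1.0: still a distribution
```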
- Exploring Unsupervised Pretraining Objectives for Machine Translation [99.5441395624651]
Unsupervised cross-lingual pretraining has achieved strong results in neural machine translation (NMT).
Most approaches adapt masked-language modeling (MLM) to sequence-to-sequence architectures, by masking parts of the input and reconstructing them in the decoder.
We compare masking with alternative objectives that produce inputs resembling real (full) sentences, by reordering and replacing words based on their context.
arXiv Detail & Related papers (2021-06-10T10:18:23Z)
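For the pretraining-objectives entry above, one simple way to produce corrupted inputs that still resemble full sentences is to locally reorder tokens and replace a small fraction of them. The sketch below uses random unigram replacement for brevity, whereas the paper replaces words based on their context; `corrupt_sentence` and its parameters are hypothetical:

```python
import random

def corrupt_sentence(tokens, vocab, window=3, replace_prob=0.1, seed=None):
    """Corrupt a sentence so it still looks like a real sentence:
    locally reorder tokens, then replace a small fraction of them."""
    rng = random.Random(seed)
    # Local reordering: each token moves at most `window` positions
    # (sort original indices after adding bounded uniform noise).
    order = sorted(range(len(tokens)), key=lambda i: i + rng.uniform(0, window))
    out = [tokens[i] for i in order]
    # Replacement: random unigrams here; the paper conditions on context.
    return [rng.choice(vocab) if rng.random() < replace_prob else t for t in out]

print(corrupt_sentence("the cat sat on the mat".split(),
                       vocab=["dog", "ran", "under"], seed=0))
```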
- TAN-NTM: Topic Attention Networks for Neural Topic Modeling [8.631228373008478]
We propose a novel framework, TAN-NTM, which models a document as a sequence of tokens instead of a bag of words (BoW) at the input layer.
We apply attention to the LSTM outputs to empower the model to attend to relevant words that convey topic-related cues.
TAN-NTM achieves state-of-the-art results, with a 9-15 percentage-point improvement over existing SOTA topic models on the NPMI coherence metric.
arXiv Detail & Related papers (2020-12-02T20:58:04Z)
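A minimal sketch in the spirit of the TAN-NTM entry above: an LSTM reads the token sequence and additive attention pools its outputs into a document vector that feeds topic inference. Class and parameter names (`TopicAttentionEncoder`, `hid_dim`, etc.) are hypothetical, and the architecture is simplified relative to the paper:

```python
import torch
import torch.nn as nn

class TopicAttentionEncoder(nn.Module):
    """LSTM over the token sequence, then attention over its outputs to
    pool a document vector for topic inference (hypothetical names)."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, n_topics=50):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.score = nn.Linear(hid_dim, 1)            # additive attention scorer
        self.to_topics = nn.Linear(hid_dim, n_topics)

    def forward(self, token_ids):                     # (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))         # (batch, seq_len, hid_dim)
        a = torch.softmax(self.score(h).squeeze(-1), dim=-1)  # word weights
        doc = torch.einsum("bs,bsh->bh", a, h)        # attention-pooled vector
        return torch.softmax(self.to_topics(doc), dim=-1)     # topic proportions

enc = TopicAttentionEncoder(vocab_size=2000)
theta = enc(torch.randint(0, 2000, (4, 30)))          # 4 docs of 30 tokens
print(theta.shape)                                    # torch.Size([4, 50])
```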
- Neural Topic Model via Optimal Transport [24.15046280736009]
We present a new neural topic model via the theory of optimal transport (OT).
Specifically, we propose to learn the topic distribution of a document by directly minimising its OT distance to the document's word distributions.
Our proposed model can be trained efficiently with a differentiable loss.
arXiv Detail & Related papers (2020-08-12T06:37:09Z)
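The OT-based topic model in the entry above can be sketched as a differentiable Sinkhorn loss between a document's topic distribution and its word distribution under a topic-word cost matrix. A minimal PyTorch version, assuming a random cost matrix in place of the embedding-based costs used in practice:

```python
import torch

def ot_topic_loss(theta, word_dist, M, reg=0.07, n_iters=50):
    """Differentiable entropy-regularised OT distance between a document's
    topic distribution theta (K,) and its word distribution word_dist (V,)
    under a topic-word cost matrix M (K, V)."""
    Kmat = torch.exp(-M / reg)                  # Gibbs kernel
    u = torch.ones_like(theta)
    v = torch.ones_like(word_dist)
    for _ in range(n_iters):
        v = word_dist / (Kmat.T @ u)
        u = theta / (Kmat @ v)
    T = u.unsqueeze(1) * Kmat * v.unsqueeze(0)  # transport plan (K, V)
    return (T * M).sum()

K, V = 10, 100
M = torch.rand(K, V)                            # stand-in for embedding costs
logits = torch.randn(K, requires_grad=True)     # toy encoder output
theta = torch.softmax(logits, dim=0)            # document's topic distribution
word_dist = torch.softmax(torch.randn(V), dim=0)
loss = ot_topic_loss(theta, word_dist, M)
loss.backward()                                 # gradients reach the encoder
print(float(loss))
```

Because the Sinkhorn iterations are plain tensor operations, the loss trains an NTM encoder end to end, which is what makes this family of models a natural base for the TopicalOT regulariser of the main paper.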
- A Novel Graph-based Multi-modal Fusion Encoder for Neural Machine Translation [131.33610549540043]
We propose a novel graph-based multi-modal fusion encoder for NMT.
We first represent the input sentence and image using a unified multi-modal graph.
We then stack multiple graph-based multi-modal fusion layers that iteratively perform semantic interactions to learn node representations.
arXiv Detail & Related papers (2020-07-17T04:06:09Z)
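The fusion encoder in the entry above can be caricatured as message passing over one graph whose nodes are both words and image regions. A minimal, hypothetical PyTorch layer (mean aggregation with a residual connection, not the paper's exact interaction functions):

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """One fusion step: every node aggregates its neighbours' features
    (mean over the adjacency matrix) through a shared linear map."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, X, A):                         # X: (n, dim), A: (n, n)
        deg = A.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.lin((A @ X) / deg)) + X   # residual keeps identity

# Unified multi-modal graph: word nodes and image-region nodes share one
# node set; cross-modal edges in A connect words to related regions.
n_words, n_regions, dim = 6, 4, 32
X = torch.randn(n_words + n_regions, dim)            # initial node features
A = (torch.rand(n_words + n_regions, n_words + n_regions) > 0.5).float()
for layer in [FusionLayer(dim) for _ in range(3)]:   # stacked fusion layers
    X = layer(X, A)
print(X.shape)                                       # torch.Size([10, 32])
```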
- Towards Making the Most of Context in Neural Machine Translation [112.9845226123306]
We argue that previous research did not make clear use of the global context.
We propose a new document-level NMT framework that deliberately models the local context of each sentence.
arXiv Detail & Related papers (2020-02-19T03:30:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the list (including all information) and is not responsible for any consequences.