Bilingual Topic Models for Comparable Corpora
- URL: http://arxiv.org/abs/2111.15278v1
- Date: Tue, 30 Nov 2021 10:53:41 GMT
- Title: Bilingual Topic Models for Comparable Corpora
- Authors: Georgios Balikas, Massih-Reza Amini, Marianne Clausel
- Abstract summary: We propose a binding mechanism between the distributions of the paired documents.
To estimate the similarity of documents that are written in different languages we use cross-lingual word embeddings that are learned with shallow neural networks.
We evaluate the proposed binding mechanism by extending two topic models: a bilingual adaptation of LDA that assumes bag-of-words inputs and a model that incorporates part of the text structure in the form of boundaries of semantically coherent segments.
- Score: 9.509416095106491
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Probabilistic topic models like Latent Dirichlet Allocation (LDA) have been
previously extended to the bilingual setting. A fundamental modeling assumption
in several of these extensions is that the input corpora are in the form of
document pairs whose constituent documents share a single topic distribution.
However, this assumption is too strong for comparable corpora, whose
documents are only thematically similar to some extent and which are, in turn,
the most commonly available or easiest to obtain. In this paper we relax this
assumption by proposing that the paired documents have separate, yet bound
topic distributions. We suggest that the strength of the bound should depend
on each pair's semantic similarity. To estimate the similarity of documents
that are
written in different languages we use cross-lingual word embeddings that are
learned with shallow neural networks. We evaluate the proposed binding
mechanism by extending two topic models: a bilingual adaptation of LDA that
assumes bag-of-words inputs and a model that incorporates part of the text
structure in the form of boundaries of semantically coherent segments. To
assess the performance of the novel topic models we conduct intrinsic and
extrinsic experiments on five bilingual, comparable corpora of English
documents with French, German, Italian, Spanish and Portuguese documents. The
results demonstrate the efficiency of our approach in terms of topic
coherence, measured by normalized point-wise mutual information; of
generalization performance, measured by perplexity; and of Mean Reciprocal
Rank in a cross-lingual document retrieval task for each of the language
pairs.
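The abstract states that pair-level similarity is estimated from cross-lingual word embeddings and that this similarity controls how tightly the paired topic distributions are bound. A minimal sketch of that idea, under assumptions not stated in the abstract (documents embedded as the average of their token vectors, cosine similarity as the measure, and made-up toy embeddings), could look like this:

```python
# Illustrative sketch only (not the authors' code): derive a binding strength
# for a document pair from cross-lingual word embeddings that live in a
# shared vector space. All embedding values below are invented toy data.
import numpy as np

def doc_vector(tokens, embeddings):
    """Average the cross-lingual embeddings of a document's in-vocabulary tokens."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def binding_strength(doc_a, doc_b, emb_a, emb_b):
    """Cosine similarity of the two document centroids, clipped to [0, 1]."""
    u = doc_vector(doc_a, emb_a)
    v = doc_vector(doc_b, emb_b)
    cos = float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(0.0, cos)

# Toy 3-dimensional embeddings in a shared cross-lingual space.
emb_en = {"economy": np.array([0.9, 0.1, 0.0]),
          "market": np.array([0.8, 0.2, 0.1])}
emb_fr = {"économie": np.array([0.85, 0.15, 0.05]),
          "marché": np.array([0.75, 0.25, 0.1])}

lam = binding_strength(["economy", "market"], ["économie", "marché"],
                       emb_en, emb_fr)
print(round(lam, 3))  # close to 1: thematically similar pair, strong binding
```

A thematically dissimilar pair would yield a lower cosine and thus a looser bound between the two topic distributions, which is the behavior the abstract describes for comparable (rather than parallel) corpora.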
Related papers
- Graph2topic: an opensource topic modeling framework based on sentence embedding and community detection [1.6242924916178283]
Clustering-based topic models can generate better topics than generative probabilistic topic models.
We propose graph to topic (G2T), a simple but effective framework for topic modelling.
G2T achieved state-of-the-art performance on both English and Chinese documents with different lengths.
arXiv Detail & Related papers (2023-04-13T16:28:07Z)
- Topics in the Haystack: Extracting and Evaluating Topics beyond Coherence [0.0]
We propose a method that incorporates a deeper understanding of both sentence and document themes.
This allows our model to detect latent topics that may include uncommon words or neologisms.
We present correlation coefficients with human identification of intruder words and achieve near-human level results at the word-intrusion task.
arXiv Detail & Related papers (2023-03-30T12:24:25Z)
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in $N$ languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- Document-Level Relation Extraction with Sentences Importance Estimation and Focusing [52.069206266557266]
Document-level relation extraction (DocRE) aims to determine the relation between two entities from a document of multiple sentences.
We propose a Sentence Estimation and Focusing (SIEF) framework for DocRE, where we design a sentence importance score and a sentence focusing loss.
Experimental results on two domains show that our SIEF not only improves overall performance, but also makes DocRE models more robust.
arXiv Detail & Related papers (2022-04-27T03:20:07Z)
- Models and Datasets for Cross-Lingual Summarisation [78.56238251185214]
We present a cross-lingual summarisation corpus with long documents in a source language associated with multi-sentence summaries in a target language.
The corpus covers twelve language pairs and directions for four European languages, namely Czech, English, French and German.
We derive cross-lingual document-summary instances from Wikipedia by combining lead paragraphs and articles' bodies from language aligned Wikipedia titles.
arXiv Detail & Related papers (2022-02-19T11:55:40Z)
- Coherence-Based Distributed Document Representation Learning for Scientific Documents [9.646001537050925]
We propose a coupled text pair embedding (CTPE) model to learn the representation of scientific documents.
We use negative sampling to construct uncoupled text pairs whose two parts are from different documents.
We train the model to judge whether the text pair is coupled or uncoupled and use the obtained embedding of coupled text pairs as the embedding of documents.
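The CTPE summary above describes building coupled text pairs from the same document and, via negative sampling, uncoupled pairs whose parts come from different documents. A minimal sketch of that pair-construction step, under the illustrative assumption that each document is simply split in half (the paper's actual choice of parts is not specified in this summary), might be:

```python
# Hedged sketch (not the CTPE authors' code): build coupled pairs (label 1,
# two halves of one document) and uncoupled pairs (label 0, halves drawn
# from different documents via negative sampling).
import random

def make_training_pairs(documents, seed=0):
    """Return (part_a, part_b, label) triples: one coupled and one
    negatively sampled uncoupled pair per document."""
    rng = random.Random(seed)
    pairs = []
    for i, doc in enumerate(documents):
        mid = len(doc) // 2
        pairs.append((doc[:mid], doc[mid:], 1))  # coupled: same document
        j = rng.choice([k for k in range(len(documents)) if k != i])
        other = documents[j]
        pairs.append((doc[:mid], other[len(other) // 2:], 0))  # uncoupled
    return pairs

docs = [["a", "b", "c", "d"], ["e", "f", "g", "h"], ["i", "j", "k", "l"]]
pairs = make_training_pairs(docs)
print(len(pairs))  # one coupled and one uncoupled pair per document
```

A binary classifier trained on such triples learns whether a pair is coupled, and the embedding of a coupled pair then serves as the document representation, as the summary states.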
arXiv Detail & Related papers (2022-01-08T15:29:21Z)
- SMDT: Selective Memory-Augmented Neural Document Translation [53.4627288890316]
We propose a Selective Memory-augmented Neural Document Translation model to deal with documents containing a large hypothesis space of context.
We retrieve similar bilingual sentence pairs from the training corpus to augment the global context.
We extend the two-stream attention model with a selective mechanism to capture the local context and diverse global contexts.
arXiv Detail & Related papers (2022-01-05T14:23:30Z)
- Author Clustering and Topic Estimation for Short Texts [69.54017251622211]
We propose a novel model that expands on the Latent Dirichlet Allocation by modeling strong dependence among the words in the same document.
We also simultaneously cluster users, removing the need for post-hoc cluster estimation.
Our method performs as well as -- or better than -- traditional approaches to problems arising in short text.
arXiv Detail & Related papers (2021-06-15T20:55:55Z)
- Nutribullets Hybrid: Multi-document Health Summarization [36.95954983680022]
We present a method for generating comparative summaries that highlights similarities and contradictions in input documents.
Our framework leads to more faithful, relevant and aggregation-sensitive summarization -- while being equally fluent.
arXiv Detail & Related papers (2021-04-08T01:44:29Z)
- Scalable Cross-lingual Document Similarity through Language-specific Concept Hierarchies [0.0]
This paper presents an unsupervised document similarity algorithm that does not require parallel or comparable corpora.
The algorithm annotates topics automatically created from documents in a single language with cross-lingual labels.
Experiments performed on the English, Spanish and French editions of JCR-Acquis corpora reveal promising results on classifying and sorting documents by similar content.
arXiv Detail & Related papers (2020-12-15T10:42:40Z)
- Towards Making the Most of Context in Neural Machine Translation [112.9845226123306]
We argue that previous research did not make clear use of the global context.
We propose a new document-level NMT framework that deliberately models the local context of each sentence.
arXiv Detail & Related papers (2020-02-19T03:30:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.