Topics as Entity Clusters: Entity-based Topics from Language Models and
Graph Neural Networks
- URL: http://arxiv.org/abs/2301.02458v1
- Date: Fri, 6 Jan 2023 10:54:54 GMT
- Title: Topics as Entity Clusters: Entity-based Topics from Language Models and
Graph Neural Networks
- Authors: Manuel V. Loureiro, Steven Derby and Tri Kurniawan Wijaya
- Abstract summary: We propose a novel approach for cluster-based topic modeling that employs conceptual entities.
Entities are language-agnostic representations of real-world concepts rich in relational information.
We demonstrate that our approach consistently outperforms other state-of-the-art topic models across coherency metrics.
- Score: 0.7734726150561089
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Topic models aim to reveal the latent structure behind a corpus, typically
conducted over a bag-of-words representation of documents. In the context of
topic modeling, most vocabulary is either irrelevant for uncovering underlying
topics or contains strong relationships with relevant concepts, impacting the
interpretability of these topics. Furthermore, their limited expressiveness and
dependency on language demand considerable computation resources. Hence, we
propose a novel approach for cluster-based topic modeling that employs
conceptual entities. Entities are language-agnostic representations of
real-world concepts rich in relational information. To this end, we extract
vector representations of entities from (i) an encyclopedic corpus using a
language model; and (ii) a knowledge base using a graph neural network. We
demonstrate that our approach consistently outperforms other state-of-the-art
topic models across coherency metrics and find that the explicit knowledge
encoded in the graph-based embeddings provides more coherent topics than the
implicit knowledge encoded with the contextualized embeddings of language
models.
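As a rough illustration of the pipeline the abstract describes, the sketch below clusters precomputed entity embeddings into topics with k-means and labels each topic by the entities nearest its centroid. The random embedding matrices, entity names, and the normalize-then-concatenate combination are illustrative assumptions, not the authors' implementation.
```python
# A minimal sketch of entity-cluster topic modeling, assuming entity
# embeddings are already available. The LM- and graph-based matrices
# below are random stand-ins, not the paper's actual vectors.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
entities = [f"entity_{i}" for i in range(1000)]   # hypothetical entity vocabulary
lm_emb = rng.normal(size=(1000, 384))             # stand-in for language-model entity vectors
kg_emb = rng.normal(size=(1000, 128))             # stand-in for GNN entity vectors

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# One simple way to combine the two views: normalize each, then concatenate.
emb = np.hstack([l2_normalize(lm_emb), l2_normalize(kg_emb)])

n_topics = 20
km = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(emb)

# Interpret each topic as the entities closest to its cluster centroid.
for t in range(n_topics):
    dists = np.linalg.norm(emb - km.cluster_centers_[t], axis=1)
    top_entities = [entities[i] for i in np.argsort(dists)[:5]]
    print(f"topic {t}: {top_entities}")
```
In the paper's setting the two embedding views would come from a language model over an encyclopedic corpus and a graph neural network over a knowledge base; documents could then be mapped to topics via the entities they mention.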
Related papers
- On the Geometry of Semantics in Next-token Prediction [27.33243506775655]
Modern language models capture linguistic meaning despite being trained solely through next-token prediction.
We investigate how this conceptually simple training objective leads models to extract and encode latent semantic and grammatical concepts.
Our work bridges distributional semantics, neural collapse geometry, and neural network training dynamics, providing insights into how NTP's implicit biases shape the emergence of meaning representations in language models.
arXiv Detail & Related papers (2025-05-13T08:46:04Z)
- MaterioMiner -- An ontology-based text mining dataset for extraction of process-structure-property entities [0.0]
We present the MaterioMiner dataset and the materials ontology where ontological concepts are associated with textual entities.
We explore the consistency between the three raters and fine-tune pre-trained models to showcase the feasibility of named-entity recognition model training.
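As a hedged sketch of that feasibility check, the snippet below runs one fine-tuning step of a generic pre-trained encoder for token classification; the checkpoint, label set, and toy example are assumptions, not details from the paper.
```python
# One training step of NER fine-tuning; checkpoint, labels, and the
# all-"O" gold sequence are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-PROCESS", "I-PROCESS", "B-PROPERTY", "I-PROPERTY"]  # assumed label set
tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))

text = "Annealing improves the tensile strength of the alloy."
enc = tok(text, return_tensors="pt")
# Real labels would come from the ontology-aligned annotations; all-"O"
# labels here just demonstrate the mechanics of one training step.
gold = torch.zeros_like(enc["input_ids"])

opt = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
opt.zero_grad()
loss = model(**enc, labels=gold).loss
loss.backward()
opt.step()
print(float(loss))
```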
arXiv Detail & Related papers (2024-08-05T21:42:59Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
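One simple probe in the spirit of the study, assuming a sentence-transformers checkpoint (the model name and sentence pairs are illustrative, not the paper's benchmark): sentences that share words but swap syntactic roles should not embed as near-duplicates.
```python
# Probe whether an embedding model distinguishes role-swapped sentences.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
a = "The dog chased the cat."
b = "The cat chased the dog."             # same words, roles swapped
c = "The cat was chased by the dog."      # different syntax, same meaning as a

emb = model.encode([a, b, c])
print("a vs b:", float(util.cos_sim(emb[0], emb[1])))  # should be lower if syntax is modeled
print("a vs c:", float(util.cos_sim(emb[0], emb[2])))  # should be higher
```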
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- SINC: Self-Supervised In-Context Learning for Vision-Language Tasks [64.44336003123102]
We propose a framework to enable in-context learning in large language models.
A meta-model can learn on self-supervised prompts consisting of tailored demonstrations.
Experiments show that SINC outperforms gradient-based methods in various vision-language tasks.
arXiv Detail & Related papers (2023-07-15T08:33:08Z)
- Constructing Word-Context-Coupled Space Aligned with Associative Knowledge Relations for Interpretable Language Modeling [0.0]
The black-box structure of the deep neural network in pre-trained language models seriously limits the interpretability of the language modeling process.
A Word-Context-Coupled Space (W2CSpace) is proposed by introducing the alignment processing between uninterpretable neural representation and interpretable statistical logic.
Our language model achieves better performance and highly credible interpretability compared to related state-of-the-art methods.
arXiv Detail & Related papers (2023-05-19T09:26:02Z)
- Perceptual Grouping in Contrastive Vision-Language Models [59.1542019031645]
We show how vision-language models are able to understand where objects reside within an image and group together visually related parts of the imagery.
We propose a minimal set of modifications that results in models that uniquely learn both semantic and spatial information.
arXiv Detail & Related papers (2022-10-18T17:01:35Z)
- Learning Attention-based Representations from Multiple Patterns for Relation Prediction in Knowledge Graphs [2.4028383570062606]
AEMP is a novel model for learning contextualized representations by acquiring entities' context information.
AEMP either outperforms or competes with state-of-the-art relation prediction methods.
arXiv Detail & Related papers (2022-06-07T10:53:35Z)
- Text analysis and deep learning: A network approach [0.0]
We propose a novel method that combines transformer models with network analysis to form a self-referential representation of language use within a corpus of interest.
Our approach produces linguistic relations strongly consistent with the underlying model as well as mathematically well-defined operations on them.
It represents, to the best of our knowledge, the first unsupervised method to extract semantic networks directly from deep language models.
arXiv Detail & Related papers (2021-10-08T14:18:36Z)
- High-dimensional distributed semantic spaces for utterances [0.2907403645801429]
This paper describes a model for high-dimensional representation for utterance and text level data.
It is based on a mathematically principled and behaviourally plausible approach to representing linguistic information.
The paper shows how the implemented model is able to represent a broad range of linguistic features in a common integral framework of fixed dimensionality.
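A sketch of one fixed-dimensionality construction consistent with this description follows: sparse ternary random index vectors per word, superposed per utterance. The dimension, sparsity, and construction are illustrative assumptions, not necessarily the paper's exact model.
```python
# Fixed-dimensional utterance vectors via superposed sparse random
# index vectors; D and NNZ are illustrative choices.
import numpy as np

D, NNZ = 2048, 8
rng = np.random.default_rng(0)
index = {}

def word_vec(w):
    # Assign each new word a sparse ternary random index vector.
    if w not in index:
        v = np.zeros(D)
        pos = rng.choice(D, size=NNZ, replace=False)
        v[pos] = rng.choice([-1.0, 1.0], size=NNZ)
        index[w] = v
    return index[w]

def utterance_vec(text):
    # An utterance is the superposition of its words' index vectors.
    return sum(word_vec(w) for w in text.lower().split())

u1 = utterance_vec("the cat sat on the mat")
u2 = utterance_vec("a cat sat on a mat")
cos = float(u1 @ u2 / (np.linalg.norm(u1) * np.linalg.norm(u2)))
print(round(cos, 3))
```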
arXiv Detail & Related papers (2021-04-01T12:09:47Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
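A hedged sketch of the kind of component this describes: one graph-convolution layer over an assumed semantic-dependency graph, fused residually with encoder token features. The graph, sizes, and wiring are illustrative.
```python
# One GCN layer over a semantic parse, mixed into token features.
import torch

n, d = 6, 32                               # tokens, hidden size
h = torch.randn(n, d)                      # stand-in for pretrained token features
edges = [(0, 1), (1, 2), (2, 4), (4, 5)]   # assumed semantic-dependency arcs

A = torch.eye(n)                           # adjacency with self-loops
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
A_hat = A / A.sum(1, keepdim=True)         # row-normalize

W = torch.nn.Linear(d, d)
h_graph = torch.relu(A_hat @ W(h))         # one graph-convolution layer
h_out = h + h_graph                        # residual fusion for finetuning
print(h_out.shape)
```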
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Neural Entity Linking: A Survey of Models Based on Deep Learning [82.43751915717225]
This survey presents a comprehensive description of recent neural entity linking (EL) systems developed since 2015.
Its goal is to systemize design features of neural entity linking systems and compare their performance to the remarkable classic methods on common benchmarks.
The survey touches on applications of entity linking, focusing on the recently emerged use-case of enhancing deep pre-trained masked language models.
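For readers new to the task, below is a generic sketch of the two canonical stages such systems share, candidate generation and candidate ranking; the tiny alias table and stand-in embeddings are illustrative, not drawn from the survey.
```python
# Generic entity linking: alias lookup, then similarity-based ranking.
import numpy as np

kb = {
    "Q937":  {"name": "Albert Einstein",   "aliases": {"einstein", "albert einstein"}},
    "Q1002": {"name": "Einstein (crater)", "aliases": {"einstein"}},  # hypothetical entry
}

def candidates(mention):
    # Candidate generation: match the mention against known aliases.
    m = mention.lower()
    return [qid for qid, e in kb.items() if m in e["aliases"]]

def rank(context_vec, cand_vecs):
    # Candidate ranking: pick the entity most similar to the mention context.
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return max(cand_vecs, key=lambda qid: cos(context_vec, cand_vecs[qid]))

rng = np.random.default_rng(1)
cands = candidates("Einstein")
cand_vecs = {qid: rng.normal(size=16) for qid in cands}  # stand-in entity embeddings
print(cands, "->", rank(rng.normal(size=16), cand_vecs))
```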
arXiv Detail & Related papers (2020-05-31T18:02:26Z)
- Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks learning over raw text with the guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
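A minimal sketch of an entity-level masking step in the spirit of this scheme, assuming entity spans are already available from linking against a knowledge graph; the offsets and example sentence are illustrative.
```python
# Mask every subword of a linked entity span rather than random tokens.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-cased")
text = "Marie Curie discovered polonium in 1898."
entity_spans = [(0, 11), (23, 31)]  # assumed character offsets of the two entities

enc = tok(text, return_offsets_mapping=True)
masked = list(enc["input_ids"])
for i, (start, end) in enumerate(enc["offset_mapping"]):
    if start == end:  # special tokens have empty offsets
        continue
    if any(s <= start and end <= e for s, e in entity_spans):
        masked[i] = tok.mask_token_id

print(tok.decode(masked))
```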
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)