Topic Modelling Meets Deep Neural Networks: A Survey
- URL: http://arxiv.org/abs/2103.00498v1
- Date: Sun, 28 Feb 2021 12:59:28 GMT
- Title: Topic Modelling Meets Deep Neural Networks: A Survey
- Authors: He Zhao, Dinh Phung, Viet Huynh, Yuan Jin, Lan Du, Wray Buntine
- Abstract summary: Topic modelling has been a successful technique for text analysis for almost twenty years.
When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models.
This paper provides a focused yet comprehensive overview of neural topic models for interested researchers in the AI community.
- Score: 25.950652301810425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Topic modelling has been a successful technique for text analysis for almost
twenty years. When topic modelling met deep neural networks, there emerged a
new and increasingly popular research area, neural topic models, with over a
hundred models developed and a wide range of applications in neural language
understanding such as text generation, summarisation and language models. There
is a need to summarise research developments and discuss open problems and
future directions. In this paper, we provide a focused yet comprehensive
overview of neural topic models for interested researchers in the AI community,
to help them navigate and innovate in this fast-growing research
area. To the best of our knowledge, ours is the first review focusing on this
specific topic.
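As a minimal illustration of the classical (pre-neural) topic models that this survey builds on, the sketch below factorises a toy document-term count matrix with non-negative matrix factorisation (NMF), one standard non-probabilistic topic-modelling technique. The corpus, vocabulary size, and number of topics are all illustrative choices, not from the paper.

```python
import numpy as np

def nmf_topics(X, n_topics=2, n_iter=200, seed=0):
    """Factorise a document-term count matrix X (docs x terms) into
    doc-topic weights W and topic-term weights H using the classical
    multiplicative-update rules for non-negative matrix factorisation."""
    rng = np.random.default_rng(seed)
    n_docs, n_terms = X.shape
    W = rng.random((n_docs, n_topics)) + 1e-3
    H = rng.random((n_topics, n_terms)) + 1e-3
    eps = 1e-9  # avoids division by zero in the updates
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy corpus: two groups of documents using disjoint vocabulary
# (terms 0-1 vs terms 2-3), so two latent topics should emerge.
X = np.array([
    [3, 2, 0, 0],
    [4, 1, 0, 0],
    [0, 0, 2, 3],
    [0, 0, 1, 4],
], dtype=float)

W, H = nmf_topics(X, n_topics=2)
dominant = W.argmax(axis=1)  # each document's dominant topic
print(dominant)
```

Neural topic models replace this kind of fixed-form factorisation or the LDA generative story with learned encoders and decoders, but the goal is the same: recover interpretable topic-term and document-topic representations.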
Related papers
- Trends, Applications, and Challenges in Human Attention Modelling [65.61554471033844]
Human attention modelling has proven to be particularly useful for understanding the cognitive processes underlying visual exploration.
It provides support to artificial intelligence models that aim to solve problems in various domains, including image and video processing, vision-and-language applications, and language modelling.
arXiv Detail & Related papers (2024-02-28T19:35:30Z) - A Survey on Neural Topic Models: Methods, Applications, and Challenges [32.510888679613004]
Topic models have been prevalent for decades to discover latent topics and infer topic proportions of documents in an unsupervised fashion.
The rise of neural networks has facilitated the emergence of a new research field: Neural Topic Models (NTMs).
In this paper, we present a comprehensive survey on neural topic models concerning methods, applications, and challenges.
arXiv Detail & Related papers (2024-01-27T08:52:19Z) - Automated Natural Language Explanation of Deep Visual Neurons with Large
Models [43.178568768100305]
This paper proposes a novel post-hoc framework for generating semantic explanations of neurons with large foundation models.
Our framework is designed to be compatible with various model architectures and datasets, enabling automated and scalable neuron interpretation.
arXiv Detail & Related papers (2023-10-16T17:04:51Z) - Towards Data-and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [73.0977635031713]
Neural-symbolic computing (NeSy) has been an active research area of Artificial Intelligence (AI) for many years.
NeSy promises to reconcile the reasoning and interpretability advantages of symbolic representations with the robust learning of neural networks.
arXiv Detail & Related papers (2022-10-28T04:38:10Z) - Knowledge-Aware Bayesian Deep Topic Model [50.58975785318575]
We propose a Bayesian generative model for incorporating prior domain knowledge into hierarchical topic modeling.
Our proposed model efficiently integrates the prior knowledge and improves both hierarchical topic discovery and document representation.
arXiv Detail & Related papers (2022-09-20T09:16:05Z) - Neural Architecture Search for Dense Prediction Tasks in Computer Vision [74.9839082859151]
Deep learning has led to a rising demand for neural network architecture engineering.
Neural architecture search (NAS) aims to design neural network architectures automatically, in a data-driven manner rather than by hand.
NAS has become applicable to a much wider range of problems in computer vision.
arXiv Detail & Related papers (2022-02-15T08:06:50Z) - Neural Attention Models in Deep Learning: Survey and Taxonomy [0.0]
Concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing.
Many different neural attention models are now available, and they have formed a very active research area over the past six years.
Here we propose a taxonomy that aligns with theoretical aspects of attention that predate deep learning.
arXiv Detail & Related papers (2021-12-11T03:35:33Z) - Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z) - Attention, please! A survey of Neural Attention Models in Deep Learning [0.0]
Neural attention models represent the state of the art in deep learning across several application domains.
This survey provides a comprehensive overview and analysis of developments in neural attention models.
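As context for what these surveys cover, the core operation shared by most neural attention models is scaled dot-product attention. The sketch below is an illustrative NumPy implementation with made-up query/key/value matrices, not code from any of the surveyed papers.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: weight each value by the
    similarity of its key to the query."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

# One query that matches the first of two keys more closely.
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0]])
out, w = attention(Q, K, V)
print(w)
```

The attention weights form a distribution over the keys, so the output is a convex combination of the values; here the query attends more strongly to the first key.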
arXiv Detail & Related papers (2021-03-31T02:42:28Z) - Positioning yourself in the maze of Neural Text Generation: A
Task-Agnostic Survey [54.34370423151014]
This paper surveys the components of modeling approaches, relating task impacts across various generation tasks such as storytelling, summarisation, and translation.
We present an abstraction of the prevailing techniques with respect to learning paradigms, pretraining, modeling approaches, decoding, and the key outstanding challenges in each of them.
arXiv Detail & Related papers (2020-10-14T17:54:42Z) - Artificial neural networks for neuroscientists: A primer [4.771833920251869]
Artificial neural networks (ANNs) are essential tools in machine learning that have drawn increasing attention in neuroscience.
In this pedagogical Primer, we introduce ANNs and demonstrate how they have been fruitfully deployed to study neuroscientific questions.
With a focus on bringing this mathematical framework closer to neurobiology, we detail how to customize the analysis, structure, and learning of ANNs.
arXiv Detail & Related papers (2020-06-01T15:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.