Attention, please! A survey of Neural Attention Models in Deep Learning
- URL: http://arxiv.org/abs/2103.16775v1
- Date: Wed, 31 Mar 2021 02:42:28 GMT
- Title: Attention, please! A survey of Neural Attention Models in Deep Learning
- Authors: Alana de Santana Correia, Esther Luna Colombini
- Abstract summary: The state-of-the-art in Deep Learning is represented by neural attention models in several application domains.
This survey provides a comprehensive overview and analysis of developments in neural attention models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In humans, Attention is a core property of all perceptual and cognitive
operations. Given our limited ability to process competing sources, attention
mechanisms select, modulate, and focus on the information most relevant to
behavior. For decades, concepts and functions of attention have been studied in
philosophy, psychology, neuroscience, and computing. For the last six years,
this property has been widely explored in deep neural networks. Currently, the
state-of-the-art in Deep Learning is represented by neural attention models in
several application domains. This survey provides a comprehensive overview and
analysis of developments in neural attention models. We systematically reviewed
hundreds of architectures in the area, identifying and discussing those in
which attention has shown a significant impact. We also developed and made
public an automated methodology to facilitate the development of reviews in the
area. By critically analyzing 650 works, we describe the primary uses of
attention in convolutional networks, recurrent networks, and generative models,
identifying common subgroups of uses and applications. Furthermore, we describe
the impact of attention in different application domains and its effect on
neural networks' interpretability. Finally, we list possible trends and
opportunities for further research, hoping that this review will provide a
succinct overview of the main attentional models in the area and guide
researchers in developing future approaches that will drive further
improvements.
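At the core of most architectures the survey covers is the same basic operation: queries are compared against a set of keys to produce weights that select and modulate values. As a minimal sketch of that idea (scaled dot-product attention; the function and variable names below are illustrative, not taken from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention.

    Q: (n_q, d)   queries
    K: (n_k, d)   keys
    V: (n_k, d_v) values
    Returns the attended values (n_q, d_v) and the weights (n_q, n_k).
    """
    d = Q.shape[-1]
    # Similarity of every query to every key, scaled to keep gradients stable.
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over the keys: each query's weights sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Weighted combination of the values: the "focus" step.
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape, w.shape)  # (2, 4) (2, 3)
```

The many variants the survey catalogues (soft/hard, self-, multi-head, spatial, channel attention) differ mainly in how the queries, keys, and values are derived and where the weights are applied.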
Related papers
- Trends, Applications, and Challenges in Human Attention Modelling [65.61554471033844]
Human attention modelling has proven to be particularly useful for understanding the cognitive processes underlying visual exploration.
It provides support to artificial intelligence models that aim to solve problems in various domains, including image and video processing, vision-and-language applications, and language modelling.
arXiv Detail & Related papers (2024-02-28T19:35:30Z)
- Towards Data- and Knowledge-Driven Artificial Intelligence: A Survey on Neuro-Symbolic Computing [73.0977635031713]
Neural-symbolic computing (NeSy) has been an active research area of Artificial Intelligence (AI) for many years.
NeSy shows promise of reconciling the advantages of reasoning and interpretability of symbolic representation and robust learning in neural networks.
arXiv Detail & Related papers (2022-10-28T04:38:10Z)
- BI-AVAN: Brain-inspired Adversarial Visual Attention Network [67.05560966998559]
We propose a brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity.
Our model imitates the biased competition process between attention-related/neglected objects to identify and locate the visual objects in a movie frame the human brain focuses on in an unsupervised manner.
arXiv Detail & Related papers (2022-10-27T22:20:36Z)
- Survey on Applications of Neurosymbolic Artificial Intelligence [37.7665470475176]
We introduce a taxonomy of common Neurosymbolic applications and summarize the state-of-the-art for each of those domains.
We identify important current trends and provide new perspectives pertaining to the future of this burgeoning field.
arXiv Detail & Related papers (2022-09-08T18:18:41Z)
- Attention Mechanism in Neural Networks: Where it Comes and Where it Goes [0.0]
A long time ago in the machine learning literature, the idea of incorporating a mechanism inspired by the human visual system into neural networks was introduced.
This study aims to provide a road map for researchers to explore the current development and get inspired for novel approaches beyond the attention.
arXiv Detail & Related papers (2022-04-27T19:29:09Z)
- Visual Attention Methods in Deep Learning: An In-Depth Survey [37.18104595529633]
Inspired by the human cognitive system, attention is a mechanism that imitates human cognitive awareness of specific information.
Deep learning has employed attention to boost performance for many applications.
The literature lacks a comprehensive survey on attention techniques to guide researchers in employing attention in their deep models.
arXiv Detail & Related papers (2022-04-16T08:57:00Z)
- Neural Attention Models in Deep Learning: Survey and Taxonomy [0.0]
Concepts and functions of attention have been studied in philosophy, psychology, neuroscience, and computing.
Many different neural attention models are now available and have been a very active research area over the past six years.
Here we propose a taxonomy grounded in theoretical aspects of attention that predate Deep Learning.
arXiv Detail & Related papers (2021-12-11T03:35:33Z)
- Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Topic Modelling Meets Deep Neural Networks: A Survey [25.950652301810425]
Topic modelling has been a successful technique for text analysis for almost twenty years.
When topic modelling met deep neural networks, there emerged a new and increasingly popular research area, neural topic models.
This paper provides a focused yet comprehensive overview of neural topic models for interested researchers in the AI community.
arXiv Detail & Related papers (2021-02-28T12:59:28Z)
- Deep Reinforced Attention Learning for Quality-Aware Visual Recognition [73.15276998621582]
We build upon the weakly-supervised generation mechanism of intermediate attention maps in any convolutional neural networks.
We introduce a meta critic network to evaluate the quality of attention maps in the main network.
arXiv Detail & Related papers (2020-07-13T02:44:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.