Deep Neural Network Based Relation Extraction: An Overview
- URL: http://arxiv.org/abs/2101.01907v2
- Date: Sun, 7 Feb 2021 19:13:54 GMT
- Title: Deep Neural Network Based Relation Extraction: An Overview
- Authors: Hailin Wang, Ke Qin, Rufai Yusuf Zakari, Guoming Lu, Jin Yin
- Abstract summary: Relation Extraction (RE) plays a vital role in Natural Language Processing (NLP)
Its purpose is to identify semantic relations between entities from natural language text.
Deep Neural Networks (DNNs) are the most popular and reliable solutions for RE.
- Score: 2.8436446946726552
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Knowledge is a formal way of understanding the world, providing human-level
cognition and intelligence for next-generation artificial intelligence (AI).
One representation of knowledge is the semantic relations between entities.
Relation Extraction (RE), a sub-task of information extraction, is an effective
way to acquire this important knowledge automatically and plays a vital role in
Natural Language Processing (NLP). Its purpose is to identify semantic
relations between entities in natural language text. Previous studies have
documented that techniques based on Deep Neural Networks (DNNs) have become
prevailing in this research; in particular, supervised and distant supervision
methods based on DNNs are the most popular and reliable solutions for RE. This
article 1) introduces some general concepts, then 2) gives a comprehensive
overview of DNNs in RE from two points of view: supervised RE, which attempts
to improve standard RE systems, and distant supervision RE, which adopts DNNs
to design the sentence encoder and the de-noising method. We further 3) cover
some novel methods and recent trends, and discuss possible future research
directions for this task.
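The distant-supervision pattern mentioned in the abstract, a sentence encoder whose bag-level output is de-noised by selective attention, can be sketched minimally as follows. The mean-pooling "encoder", the embedding dimension, and the relation query vector are illustrative assumptions standing in for the CNN/RNN encoders and learned parameters the survey actually discusses.

```python
# Minimal sketch: encode each sentence in a bag of distantly labeled
# sentences, then de-noise via selective attention (down-weighting
# sentences that match the relation query poorly). All dimensions and
# the mean-pooling "encoder" are toy stand-ins, not the survey's models.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hypothetical embedding size

def encode_sentence(token_vecs: np.ndarray) -> np.ndarray:
    """Toy encoder: mean-pool token embeddings (stand-in for a CNN/RNN)."""
    return token_vecs.mean(axis=0)

def selective_attention(bag, query: np.ndarray) -> np.ndarray:
    """De-noise a bag: weight each sentence vector by its similarity
    to a relation query vector, then aggregate with softmax weights."""
    reps = np.stack([encode_sentence(s) for s in bag])  # (n, DIM)
    scores = reps @ query                               # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                            # softmax over the bag
    return weights @ reps                               # (DIM,)

# A "bag" of three sentences sharing the same entity pair.
bag = [rng.normal(size=(5, DIM)) for _ in range(3)]
relation_query = rng.normal(size=DIM)
bag_vec = selective_attention(bag, relation_query)
print(bag_vec.shape)  # (8,)
```

The bag-level vector would then feed a relation classifier; the attention weights are what suppress mislabeled sentences introduced by distant supervision.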
Related papers
- Unveiling Ontological Commitment in Multi-Modal Foundation Models [7.485653059927206]
Deep neural networks (DNNs) automatically learn rich representations of concepts and respective reasoning.
We propose a method that extracts the learned superclass hierarchy from a multimodal DNN for a given set of leaf concepts.
An initial evaluation study shows that meaningful ontological class hierarchies can be extracted from state-of-the-art foundation models.
arXiv Detail & Related papers (2024-09-25T17:24:27Z) - Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks [16.60622265961373]
Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence.
Recent advances in literature have studied the network representations of deep neural networks.
arXiv Detail & Related papers (2024-03-19T05:37:26Z) - A Comprehensive Survey on Relation Extraction: Recent Advances and New Frontiers [76.51245425667845]
Relation extraction (RE) involves identifying the relations between entities from underlying content.
Deep neural networks have dominated the field of RE and made noticeable progress.
This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems.
arXiv Detail & Related papers (2023-06-03T08:39:25Z) - Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z) - exploRNN: Understanding Recurrent Neural Networks through Visual Exploration [6.006493809079212]
Recurrent neural networks (RNNs) are capable of processing sequential data.
We propose exploRNN, the first interactively explorable educational visualization for RNNs.
We provide an overview of the training process of RNNs at a coarse level, while also allowing detailed inspection of the data-flow within LSTM cells.
arXiv Detail & Related papers (2020-12-09T15:06:01Z) - Learning from Context or Names? An Empirical Study on Neural Relation Extraction [112.06614505580501]
We study the effect of two main information sources in text: textual context and entity mentions (names).
We propose an entity-masked contrastive pre-training framework for relation extraction (RE).
Our framework can improve the effectiveness and robustness of neural models in different RE scenarios.
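The entity-masking idea summarized in the entry above can be illustrated with a small sketch: entity mentions are replaced with placeholder tokens so a model must rely on textual context rather than memorized names. The placeholder token names and the helper below are assumptions for illustration, not the paper's exact implementation.

```python
# Illustrative sketch of entity masking for RE pre-training: collapse the
# head and tail entity mentions into placeholder tokens. The token names
# [BLANK-H]/[BLANK-T] and this helper are hypothetical, for illustration.
def mask_entities(tokens, head_span, tail_span):
    """Replace head/tail mentions with placeholders.
    Spans are (start, end) token indices, end exclusive."""
    out = []
    for i, tok in enumerate(tokens):
        if head_span[0] <= i < head_span[1]:
            if i == head_span[0]:
                out.append("[BLANK-H]")  # one placeholder per mention
        elif tail_span[0] <= i < tail_span[1]:
            if i == tail_span[0]:
                out.append("[BLANK-T]")
        else:
            out.append(tok)
    return out

sent = "Steve Jobs founded Apple in 1976".split()
print(mask_entities(sent, (0, 2), (3, 4)))
# ['[BLANK-H]', 'founded', '[BLANK-T]', 'in', '1976']
```

A contrastive objective would then pull together masked sentences expressing the same relation, forcing the model to learn from context.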
arXiv Detail & Related papers (2020-10-05T11:21:59Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z) - DEPARA: Deep Attribution Graph for Deep Knowledge Transferability [91.06106524522237]
We propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs.
In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN.
The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs.
arXiv Detail & Related papers (2020-03-17T02:07:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.