Knowledge-augmented Deep Learning and Its Applications: A Survey
- URL: http://arxiv.org/abs/2212.00017v1
- Date: Wed, 30 Nov 2022 03:44:15 GMT
- Title: Knowledge-augmented Deep Learning and Its Applications: A Survey
- Authors: Zijun Cui, Tian Gao, Kartik Talamadupula, and Qiang Ji
- Abstract summary: Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
- Score: 60.221292040710885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning models, though having achieved great success in many
fields over the past years, are usually data-hungry, fail to perform well on
unseen samples, and lack interpretability. Various forms of prior knowledge
often exist in the target domain, and their use can alleviate these
deficiencies of deep learning. To better mimic the behavior of the human
brain, different advanced methods have been proposed to identify domain
knowledge and integrate it into deep models for data-efficient, generalizable,
and interpretable deep learning,
which we refer to as knowledge-augmented deep learning (KADL). In this survey,
we define the concept of KADL, and introduce its three major tasks, i.e.,
knowledge identification, knowledge representation, and knowledge integration.
Different from existing surveys that are focused on a specific type of
knowledge, we provide a broad and complete taxonomy of domain knowledge and its
representations. Based on our taxonomy, we provide a systematic review of
existing techniques, different from existing works that survey integration
approaches agnostic to taxonomy of knowledge. This survey subsumes existing
works and offers a bird's-eye view of research in the general area of
knowledge-augmented deep learning. The thorough and critical review of
numerous papers helps not only to understand current progress but also to
identify future directions for research on knowledge-augmented deep learning.
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Deep Learning-Based Knowledge Injection for Metaphor Detection: A Comprehensive Review [24.968400793968417]
This paper provides a review of research advances in the application of deep learning for knowledge injection in metaphor detection tasks.
We will first systematically summarize and generalize the mainstream knowledge and knowledge injection principles.
Then, the datasets, evaluation metrics, and benchmark models used in metaphor detection tasks are examined.
arXiv Detail & Related papers (2023-08-08T14:51:16Z)
- The Life Cycle of Knowledge in Big Language Models: A Survey [39.955688635216056]
Pre-trained language models (PLMs) have drawn significant attention to how knowledge is acquired, maintained, updated, and used by language models.
Despite the enormous number of related studies, a unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes is still lacking.
We revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods, and investigating how knowledge circulates when it is built, maintained, and used.
arXiv Detail & Related papers (2023-03-14T03:49:22Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)
- A Survey on Knowledge Graphs: Representation, Acquisition and Applications [89.78089494738002]
We review research topics about 1) knowledge graph representation learning, 2) knowledge acquisition and completion, 3) temporal knowledge graph, and 4) knowledge-aware applications.
For knowledge acquisition, especially knowledge graph completion, we review embedding methods, path inference, and logical rule reasoning.
We explore several emerging topics, including meta learning, commonsense reasoning, and temporal knowledge graphs.
arXiv Detail & Related papers (2020-02-02T13:17:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.