Integration of knowledge and data in machine learning
- URL: http://arxiv.org/abs/2202.10337v1
- Date: Tue, 15 Feb 2022 10:35:53 GMT
- Title: Integration of knowledge and data in machine learning
- Authors: Yuntian Chen, Dongxiao Zhang
- Abstract summary: Through knowledge embedding, barriers between knowledge and data can be broken, and machine learning models with physical common sense can be formed.
Knowledge discovery takes advantage of machine learning to extract new knowledge from observations.
This study not only summarizes and analyzes the existing literature, but also proposes research gaps and future opportunities.
- Score: 0.456877715768796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The duty and goal of scientific research are to comprehend and explore the world,
as well as to modify it based on experience and knowledge. Knowledge embedding
and knowledge discovery are two significant methods of integrating knowledge
and data. Through knowledge embedding, the barriers between knowledge and data
can be broken, and machine learning models with physical common sense can be
formed. Meanwhile, humans' understanding of the world is always limited, and
knowledge discovery takes advantage of machine learning to extract new
knowledge from observations. Not only may knowledge discovery help researchers
better grasp the nature of physics, but it can also help them conduct knowledge
embedding research. A closed loop of knowledge generation and usage is formed
by combining knowledge embedding with knowledge discovery, which can improve
the robustness and accuracy of the model and uncover unknown scientific
principles. This study not only summarizes and analyzes the existing
literature, but also proposes research gaps and future opportunities.
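As a minimal, hypothetical sketch of knowledge embedding (not taken from the paper itself), the example below fits sparse noisy observations with a flexible polynomial model while softly enforcing a known physical law, dy/dt = -k*y, at collocation points. All model choices (polynomial basis, k = 1.5, penalty weight) are illustrative assumptions.

```python
import numpy as np

# Knowledge embedding sketch: combine a data-fit loss with a physics-residual
# penalty for the law dy/dt = -k*y (k assumed known), solved jointly by
# linear least squares.

rng = np.random.default_rng(0)
k_true = 1.5

# Sparse, noisy observations of y(t) = exp(-k t) on [0, 1]
t_data = np.array([0.0, 0.3, 0.6, 1.0])
y_data = np.exp(-k_true * t_data) + 0.02 * rng.normal(size=t_data.size)

degree = 5

def basis(t):
    # Polynomial basis [1, t, t^2, ..., t^degree]
    return np.vander(t, degree + 1, increasing=True)

def basis_deriv(t):
    # Derivative of each basis column: [0, 1, 2t, ..., degree * t^(degree-1)]
    cols = [np.zeros_like(t)] + [n * t ** (n - 1) for n in range(1, degree + 1)]
    return np.stack(cols, axis=1)

# Collocation points cover [0, 2]: the embedded law constrains the model even
# where there is no data at all (t > 1).
t_col = np.linspace(0.0, 2.0, 40)
lam = 10.0  # weight of the physics penalty

# Stack data equations and physics-residual equations into one least-squares
# problem: minimize ||B_d c - y||^2 + lam * ||(B'_c + k B_c) c||^2
A = np.vstack([basis(t_data),
               np.sqrt(lam) * (basis_deriv(t_col) + k_true * basis(t_col))])
b = np.concatenate([y_data, np.zeros(t_col.size)])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

# The physics-informed fit tracks exp(-k t) even beyond the data range
t_test = np.linspace(0.0, 2.0, 9)
y_fit = basis(t_test) @ coef
print(np.max(np.abs(y_fit - np.exp(-k_true * t_test))))
```

The penalty weight trades off fidelity to noisy data against consistency with the embedded law, which is how such hybrid losses can improve robustness and extrapolation relative to a purely data-driven fit.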
Related papers
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679]
This paper reviews knowledge mechanism analysis from a novel taxonomy including knowledge utilization and evolution.
We discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (hypothesis) that will be challenging to address.
arXiv Detail & Related papers (2024-07-22T06:15:59Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - From task structures to world models: What do LLMs know? [0.0]
In what sense does a large language model have knowledge?
We answer by granting LLMs "instrumental knowledge": knowledge defined by a certain set of abilities.
We then ask how such knowledge is related to the more ordinary, "worldly" knowledge exhibited by human agents, and explore this in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science.
arXiv Detail & Related papers (2023-10-06T14:21:59Z) - Worth of knowledge in deep learning [3.132595571344153]
We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
arXiv Detail & Related papers (2023-07-03T02:25:19Z) - Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose Learning by Applying (LeAp), a framework that enhances existing models (backbones) in a principled way through explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z) - Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z) - KnowledgeShovel: An AI-in-the-Loop Document Annotation System for Scientific Knowledge Base Construction [46.56643271476249]
KnowledgeShovel is an AI-in-the-Loop document annotation system for researchers to construct scientific knowledge bases.
The design of KnowledgeShovel introduces a multi-step, multi-modal AI collaboration pipeline to improve data accuracy while reducing the human burden.
A follow-up user evaluation with 7 geoscience researchers shows that KnowledgeShovel can enable efficient construction of scientific knowledge bases with satisfactory accuracy.
arXiv Detail & Related papers (2022-10-06T11:38:18Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.