Worth of knowledge in deep learning
- URL: http://arxiv.org/abs/2307.00712v1
- Date: Mon, 3 Jul 2023 02:25:19 GMT
- Title: Worth of knowledge in deep learning
- Authors: Hao Xu, Yuntian Chen, Dongxiao Zhang
- Abstract summary: We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
- Score: 3.132595571344153
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Knowledge constitutes the accumulated understanding and experience that
humans use to gain insight into the world. In deep learning, prior knowledge is
essential for mitigating shortcomings of data-driven models, such as data
dependence, limited generalization, and non-compliance with constraints. To enable
efficient evaluation of the worth of knowledge, we present a framework inspired
by interpretable machine learning. Through quantitative experiments, we assess
the influence of data volume and estimation range on the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge,
including dependence, synergy, and substitution effects. Our model-agnostic
framework can be applied to a variety of common network architectures,
providing a comprehensive understanding of the role of prior knowledge in deep
learning models. It can also be used to improve the performance of informed
machine learning, as well as to distinguish improper prior knowledge.
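The abstract does not spell out the paper's procedure, but its premise — quantifying the worth of prior knowledge with interpretable-ML-style comparisons — can be illustrated with a minimal toy sketch. This is an illustrative assumption, not the authors' method: it measures "worth" as the test-error reduction from blending a purely data-driven fit with a prior on the model parameter. All names (`fit_slope`, `a_prior`, the weight `lam`) are hypothetical.

```python
import random

random.seed(0)

def fit_slope(xs, ys):
    # ordinary least squares through the origin: a = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def mse(a, xs, ys):
    # mean squared error of the model y = a*x
    return sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / len(xs)

true_a = 2.0
# small, noisy training set: the data-driven estimate alone is unreliable
train_x = [random.uniform(0.0, 1.0) for _ in range(5)]
train_y = [true_a * x + random.gauss(0.0, 0.5) for x in train_x]
# noiseless test grid for evaluating generalization
test_x = [i / 100 for i in range(1, 101)]
test_y = [true_a * x for x in test_x]

# data-only estimate
a_data = fit_slope(train_x, train_y)

# knowledge-informed estimate: penalize deviation from a prior value a_prior
# (here the prior happens to be correct; lam weights knowledge against data)
a_prior, lam = 2.0, 1.0
num = sum(x * y for x, y in zip(train_x, train_y)) + lam * a_prior
den = sum(x * x for x in train_x) + lam
a_informed = num / den

# "worth" of the prior = reduction in test error from adding the knowledge term
worth = mse(a_data, test_x, test_y) - mse(a_informed, test_x, test_y)
print(f"worth of prior knowledge: {worth:.4f}")
```

With a correct prior and scarce data the informed estimate is a convex combination of the data estimate and the prior, so the worth is positive; a misspecified `a_prior` would drive it negative, which is one way to flag improper prior knowledge.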
Related papers
- Large Language Models are Limited in Out-of-Context Knowledge Reasoning [65.72847298578071]
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning.
This paper focuses on a significant aspect of out-of-context reasoning: Out-of-Context Knowledge Reasoning (OCKR), which is to combine multiple knowledge to infer new knowledge.
arXiv Detail & Related papers (2024-06-11T15:58:59Z)
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Learning by Applying: A General Framework for Mathematical Reasoning via Enhancing Explicit Knowledge Learning [47.96987739801807]
We propose a framework to enhance existing models (backbones) in a principled way by explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
arXiv Detail & Related papers (2023-02-11T15:15:41Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Informed Learning by Wide Neural Networks: Convergence, Generalization and Sampling Complexity [27.84415856657607]
We study how and why domain knowledge benefits the performance of informed learning.
We propose a generalized informed training objective to better exploit the benefits of knowledge and balance the label and knowledge imperfectness.
arXiv Detail & Related papers (2022-07-02T06:28:25Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Knowledge Modelling and Active Learning in Manufacturing [0.6299766708197884]
Ontologies and Knowledge Graphs provide means to model and relate a wide range of concepts, problems, and configurations.
Both can be used to generate new knowledge through deductive inference and identify missing knowledge.
Active learning can be used to identify the most informative data instances for which to obtain users' feedback, reduce friction, and maximize knowledge acquisition.
arXiv Detail & Related papers (2021-07-05T22:07:21Z)
- A Quantitative Perspective on Values of Domain Knowledge for Machine Learning [27.84415856657607]
Domain knowledge in various forms plays a crucial role in improving learning performance.
We study the problem of quantifying the values of domain knowledge in terms of its contribution to the learning performance.
arXiv Detail & Related papers (2020-11-17T06:12:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.