Learning by Applying: A General Framework for Mathematical Reasoning via
Enhancing Explicit Knowledge Learning
- URL: http://arxiv.org/abs/2302.05717v1
- Date: Sat, 11 Feb 2023 15:15:41 GMT
- Title: Learning by Applying: A General Framework for Mathematical Reasoning via
Enhancing Explicit Knowledge Learning
- Authors: Jiayu Liu, Zhenya Huang, Chengxiang Zhai, Qi Liu
- Abstract summary: We propose a framework to enhance existing models (backbones) in a principled way by explicit knowledge learning.
In LeAp, we perform knowledge learning in a novel problem-knowledge-expression paradigm.
We show that LeAp improves all backbones' performances, learns accurate knowledge, and achieves a more interpretable reasoning process.
- Score: 47.96987739801807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mathematical reasoning is one of the crucial abilities of general artificial
intelligence, which requires machines to master mathematical logic and
knowledge from solving problems. However, existing approaches are not
transparent (thus not interpretable) in terms of what knowledge has been
learned and applied in the reasoning process. In this paper, we propose a
general Learning by Applying (LeAp) framework to enhance existing models
(backbones) in a principled way by explicit knowledge learning. In LeAp, we
perform knowledge learning in a novel problem-knowledge-expression paradigm,
with a Knowledge Encoder to acquire knowledge from problem data and a Knowledge
Decoder to apply knowledge for expression reasoning. The learned mathematical
knowledge, including word-word relations and word-operator relations, forms an
explicit knowledge graph, which bridges the knowledge "learning" and "applying"
organically. Moreover, for problem solving, we design a semantics-enhanced
module and a reasoning-enhanced module that apply knowledge to improve the
problem comprehension and symbol reasoning abilities of any backbone,
respectively. We theoretically prove the superiority of LeAp's autonomous
learning mechanism. Experiments on three real-world datasets show that LeAp
improves all backbones' performances, learns accurate knowledge, and achieves a
more interpretable reasoning process.
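Based only on the abstract above, here is a minimal, hypothetical sketch of how the problem-knowledge-expression paradigm could be wired up: a Knowledge Encoder that scores word-word / word-operator relations from problem token encodings, and a Knowledge Decoder that applies the resulting soft knowledge graph when reasoning toward expression symbols. All names (KnowledgeEncoder, KnowledgeDecoder, edge_probs, etc.) are illustrative assumptions, not the authors' actual implementation, and the modules are deliberately simplistic.

```python
# Toy reading of the problem-knowledge-expression paradigm (assumed design,
# NOT the paper's implementation).
import torch
import torch.nn as nn


class KnowledgeEncoder(nn.Module):
    """Scores candidate word-word / word-operator relations from problem tokens."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.scorer = nn.Bilinear(hidden_size, hidden_size, 1)

    def forward(self, token_states: torch.Tensor) -> torch.Tensor:
        # token_states: (num_tokens, hidden_size)
        n, h = token_states.shape
        left = token_states.unsqueeze(1).expand(n, n, h).reshape(n * n, h)
        right = token_states.unsqueeze(0).expand(n, n, h).reshape(n * n, h)
        # edge_logits[i, j] ~ confidence that tokens i and j are related
        return self.scorer(left, right).view(n, n)


class KnowledgeDecoder(nn.Module):
    """Applies the (soft) knowledge graph while scoring expression symbols."""

    def __init__(self, hidden_size: int, num_symbols: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_symbols)

    def forward(self, token_states: torch.Tensor,
                edge_probs: torch.Tensor) -> torch.Tensor:
        # Propagate token information along predicted relations (graph step),
        # then pool and score the expression-symbol vocabulary.
        fused = edge_probs @ token_states        # (num_tokens, hidden_size)
        pooled = fused.mean(dim=0)               # problem-level summary
        return self.proj(pooled)                 # logits over expression symbols


if __name__ == "__main__":
    hidden, num_symbols, num_tokens = 32, 10, 7
    encoder = KnowledgeEncoder(hidden)
    decoder = KnowledgeDecoder(hidden, num_symbols)
    tokens = torch.randn(num_tokens, hidden)      # stand-in for backbone encodings
    edge_probs = torch.sigmoid(encoder(tokens))   # explicit (soft) knowledge graph
    symbol_logits = decoder(tokens, edge_probs)
    print(symbol_logits.shape)                    # torch.Size([10])
```

In this reading, the sigmoid-ed edge scores play the role of the explicit knowledge graph that bridges knowledge "learning" (the encoder) and "applying" (the decoder); the paper's semantics-enhanced and reasoning-enhanced modules would plug this graph into a concrete backbone solver.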
Related papers
- Knowledge Mechanisms in Large Language Models: A Survey and Perspective [88.51320482620679]
This paper reviews knowledge mechanism analysis through a novel taxonomy covering knowledge utilization and evolution.
We discuss what knowledge LLMs have learned, the reasons for the fragility of parametric knowledge, and the potential dark knowledge (hypothesis) that will be challenging to address.
arXiv Detail & Related papers (2024-07-22T06:15:59Z) - Chain-of-Knowledge: Integrating Knowledge Reasoning into Large Language Models by Learning from Knowledge Graphs [55.317267269115845]
Chain-of-Knowledge (CoK) is a comprehensive framework for knowledge reasoning.
CoK includes methodologies for both dataset construction and model learning.
We conduct extensive experiments with the KnowReason dataset.
arXiv Detail & Related papers (2024-06-30T10:49:32Z) - Representing Pedagogic Content Knowledge Through Rough Sets [0.0]
The paper is meant for rough set researchers intending to build logical models or develop meaning-aware AI software to aid teachers.
The main advantage of the proposed approach is its ability to coherently handle vagueness and multi-modality.
arXiv Detail & Related papers (2024-02-26T11:00:45Z) - Learning principle and mathematical realization of the learning
mechanism in the brain [0.0]
We call this the learning principle, and it follows that all learning is equivalent to estimating the probability of input data.
We show that conventional supervised learning is equivalent to estimating conditional probabilities, and succeed in making supervised learning more effective and more general.
We propose a new method of defining estimated probability values using differentiation, and show that unsupervised learning can be performed on an arbitrary dataset without any prior knowledge.
arXiv Detail & Related papers (2023-11-22T12:08:01Z) - Worth of knowledge in deep learning [3.132595571344153]
We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
arXiv Detail & Related papers (2023-07-03T02:25:19Z) - Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
arXiv Detail & Related papers (2021-10-15T21:58:03Z) - Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
Experiments on text classification show promising results.
We import the knowledge from multiple models to the knowledge base, from which the fused knowledge is exported back to a single model.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)