Automated Discovery of Integral with Deep Learning
- URL: http://arxiv.org/abs/2402.18040v1
- Date: Wed, 28 Feb 2024 04:34:15 GMT
- Title: Automated Discovery of Integral with Deep Learning
- Authors: Xiaoxin Yin
- Abstract summary: We show that deep learning models can approach the task of inferring integrals either through a sequence-to-sequence model, or by uncovering the rudimentary principles of integration.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in the realm of deep learning, particularly in the
development of large language models (LLMs), have demonstrated AI's ability to
tackle complex mathematical problems and solve programming challenges.
However, the capability to solve well-defined problems based on extensive
training data differs significantly from the nuanced process of making
scientific discoveries. Trained on almost all available human knowledge,
today's sophisticated LLMs essentially learn to predict sequences of tokens. They
generate mathematical derivations and write code much as they would write an
essay, and do not have the ability to pioneer scientific discoveries in the
manner a human scientist would.
In this study we delve into the potential of using deep learning to
rediscover a fundamental mathematical concept: integrals. By defining an integral
as the area under a curve, we illustrate how AI can deduce the integral of a
given function, exemplified by inferring $\int_{0}^{x} t^2 dt = \frac{x^3}{3}$
and $\int_{0}^{x} ae^{bt} dt = \frac{a}{b} e^{bx} - \frac{a}{b}$. Our
experiments show that deep learning models can approach the task of inferring
integrals either through a sequence-to-sequence model, akin to language
translation, or by uncovering the rudimentary principles of integration, such
as $\int_{0}^{x} t^n dt = \frac{x^{n+1}}{n+1}$.
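To make the area-under-the-curve setup concrete, below is a minimal Python sketch: training pairs $(x, \int_{0}^{x} t^2 dt)$ are generated by numerical integration, and a polynomial fit recovers the $\frac{x^3}{3}$ closed form. The function names and the polynomial-fit step are illustrative assumptions for this summary, not the paper's actual models, which use deep networks and sequence-to-sequence training.

```python
import numpy as np

def area_under_curve(f, x, n_steps=10_000):
    """Trapezoidal-rule approximation of the integral of f over [0, x]."""
    t = np.linspace(0.0, x, n_steps)
    y = f(t)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0)

# Generate (x, area) training pairs for f(t) = t^2, mirroring the
# "integral as area under the curve" definition in the abstract.
xs = np.linspace(0.1, 5.0, 50)
ys = np.array([area_under_curve(lambda t: t**2, x) for x in xs])

# Sanity check: the numerical areas match the closed form x^3 / 3
# that a model is expected to rediscover.
assert np.allclose(ys, xs**3 / 3, rtol=1e-4)

# A stand-in for the learning step (hypothetical, not the paper's
# method): a cubic fit recovers a leading coefficient near 1/3,
# i.e. the power rule for n = 2.
coeffs = np.polyfit(xs, ys, deg=3)
print(coeffs)  # roughly [0.333, 0, 0, 0]
```

The seq2seq alternative mentioned above would instead tokenize the integrand (e.g. `t ^ 2`) as a source sequence and train a translation-style model to emit the antiderivative tokens (e.g. `x ^ 3 / 3`).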
Related papers
- Formal Mathematical Reasoning: A New Frontier in AI [60.26950681543385]
We advocate for formal mathematical reasoning and argue that it is indispensable for advancing AI4Math to the next level.
We summarize existing progress, discuss open challenges, and envision critical milestones to measure future success.
arXiv Detail & Related papers (2024-12-20T17:19:24Z) - Data for Mathematical Copilots: Better Ways of Presenting Proofs for Machine Learning [85.635988711588]
We argue that enhancing the capabilities of large language models requires a paradigm shift in the design of mathematical datasets.
We advocate for mathematical dataset developers to consider the concept of "motivated proof", introduced by G. Pólya in 1949, which can serve as a blueprint for datasets that offer a better proof learning signal.
We provide a questionnaire designed specifically for math datasets that we urge creators to include with their datasets.
arXiv Detail & Related papers (2024-12-19T18:55:17Z) - Math Agents: Computational Infrastructure, Mathematical Embedding, and
Genomics [0.0]
Beyond human-AI chat, large language models (LLMs) are emerging in programming, algorithm discovery, and theorem proving.
This project introduces Math Agents and mathematical embedding as fresh entries to the "Moore's Law of Mathematics".
The project aims to use Math Agents and mathematical embeddings to address the ageing issue in information systems biology.
arXiv Detail & Related papers (2023-07-04T20:16:32Z) - A Survey of Deep Learning for Mathematical Reasoning [71.88150173381153]
We review the key tasks, datasets, and methods at the intersection of mathematical reasoning and deep learning over the past decade.
Recent advances in large-scale neural language models have opened up new benchmarks and opportunities to use deep learning for mathematical reasoning.
arXiv Detail & Related papers (2022-12-20T18:46:16Z) - Neural Integral Equations [3.087238735145305]
We introduce a method for learning unknown integral operators from data using an integral equation (IE) solver.
We also present Attentional Neural Integral Equations (ANIE), which replaces the integral with self-attention.
arXiv Detail & Related papers (2022-09-30T02:32:17Z) - JiuZhang: A Chinese Pre-trained Language Model for Mathematical Problem
Understanding [74.12405417718054]
This paper aims to advance the mathematical intelligence of machines by presenting the first Chinese mathematical pre-trained language model (PLM).
Unlike texts in standard NLP tasks, mathematical texts are difficult to understand, since they involve mathematical terminology, symbols, and formulas in the problem statement.
We design a novel curriculum pre-training approach for improving the learning of mathematical PLMs, consisting of both basic and advanced courses.
arXiv Detail & Related papers (2022-06-13T17:03:52Z) - AutoIP: A Unified Framework to Integrate Physics into Gaussian Processes [15.108333340471034]
We propose a framework that can integrate all kinds of differential equations into Gaussian processes.
Our method shows improvement upon vanilla GPs in both simulation and several real-world applications.
arXiv Detail & Related papers (2022-02-24T19:02:14Z) - Abstraction, Reasoning and Deep Learning: A Study of the "Look and Say"
Sequence [0.0]
Deep neural networks can exhibit high 'competence' (as measured by accuracy) when trained on large data sets.
We report on two sets of experiments on the "Look and Say" puzzle data.
Despite the amazing accuracy (on both training and test data), the performance of the trained programs on the actual L&S sequence is bad.
arXiv Detail & Related papers (2021-09-27T01:41:37Z) - Learning to extrapolate using continued fractions: Predicting the
critical temperature of superconductor materials [5.905364646955811]
In the field of Artificial Intelligence (AI) and Machine Learning (ML), the approximation of unknown target functions $y=f(\mathbf{x})$ is a common objective.
We refer to $S$ as the training set and aim to identify a low-complexity mathematical model that can effectively approximate this target function for new instances $\mathbf{x}$.
arXiv Detail & Related papers (2020-11-27T04:57:40Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe an algorithm for discovering mathematical equations from given observational data.
The algorithm combines genetic programming with sparse regression.
It can be used to discover governing analytical equations as well as partial differential equations (PDEs).
arXiv Detail & Related papers (2020-04-03T17:21:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.