Solving the 2D Advection-Diffusion Equation using Fixed-Depth Symbolic Regression and Symbolic Differentiation without Expression Trees
- URL: http://arxiv.org/abs/2411.00011v2
- Date: Mon, 11 Nov 2024 03:34:46 GMT
- Authors: Edward Finkelstein
- Abstract: This paper presents a novel method for solving the 2D advection-diffusion equation using fixed-depth symbolic regression and symbolic differentiation without expression trees. The method is applied to two cases with distinct initial and boundary conditions, demonstrating its accuracy and ability to find approximate solutions efficiently. This framework offers a promising, scalable solution for finding approximate solutions to differential equations, with the potential for future improvements in computational performance and applicability to more complex systems involving vector-valued objectives.
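The abstract describes symbolic differentiation carried out without expression trees. One way to realize that idea, sketched below under assumptions of my own (the paper's exact representation, operator set, and function names are not given here), is to store expressions as flat prefix-notation token lists and differentiate by recursing over array indices rather than tree nodes:

```python
# Hedged sketch: symbolic differentiation over flat prefix-notation token
# lists, with no explicit expression-tree data structure. The operator
# set and helper names are illustrative assumptions, not the paper's code.

ARITY = {'+': 2, '*': 2}

def span(tokens, i=0):
    """Return the index one past the subexpression starting at i."""
    n = ARITY.get(tokens[i], 0)
    i += 1
    for _ in range(n):
        i = span(tokens, i)
    return i

def diff(tokens, var, i=0):
    """Differentiate the subexpression at index i with respect to var,
    returning a new flat prefix token list."""
    t = tokens[i]
    if t == '+':
        a = i + 1
        b = span(tokens, a)
        return ['+'] + diff(tokens, var, a) + diff(tokens, var, b)
    if t == '*':
        a = i + 1
        b = span(tokens, a)
        end = span(tokens, b)
        u, v = tokens[a:b], tokens[b:end]
        # product rule: (u*v)' = u'*v + u*v'
        return (['+', '*'] + diff(tokens, var, a) + v
                + ['*'] + u + diff(tokens, var, b))
    return ['1'] if t == var else ['0']

# d/dx (x + x*y) in prefix form
print(diff(['+', 'x', '*', 'x', 'y'], 'x'))
# → ['+', '1', '+', '*', '1', 'y', '*', 'x', '0']
```

Because subexpressions are located by index arithmetic (`span`) instead of pointer traversal, the same flat arrays can double as fixed-depth genomes for the symbolic-regression search, which is presumably what makes the tree-free representation attractive here.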
Related papers
- Neuro-Symbolic AI for Analytical Solutions of Differential Equations [11.177091143370466]
We present an approach to find analytical solutions of differential equations using a neuro-symbolic AI framework.
This integration unifies numerical and symbolic differential equation solvers via a neuro-symbolic AI framework.
We show advantages over commercial solvers, symbolic methods, and approximate neural networks on a diverse set of problems.
arXiv Detail & Related papers (2025-02-03T16:06:56Z) - A Physics-Informed Machine Learning Approach for Solving Distributed Order Fractional Differential Equations [0.0]
This paper introduces a novel methodology for solving distributed-order fractional differential equations using a physics-informed machine learning framework.
By embedding the distributed-order functional equation into the SVR framework, we incorporate physical laws directly into the learning process.
The effectiveness of the proposed approach is validated through a series of numerical experiments on Caputo-based distributed-order fractional differential equations.
arXiv Detail & Related papers (2024-09-05T13:20:10Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Enhancing Low-Order Discontinuous Galerkin Methods with Neural Ordinary Differential Equations for Compressible Navier--Stokes Equations [0.1578515540930834]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations.
This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term.
We demonstrate the performance of the proposed framework through two examples.
arXiv Detail & Related papers (2023-10-29T04:26:23Z) - Comparison of Single- and Multi-Objective Optimization Quality for Evolutionary Equation Discovery [77.34726150561087]
Evolutionary differential equation discovery has proved to be a tool for obtaining equations with fewer a priori assumptions.
The proposed comparison approach is shown on classical model examples -- the Burgers equation, the wave equation, and the Korteweg-de Vries equation.
arXiv Detail & Related papers (2023-06-29T15:37:19Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Symbolic Recovery of Differential Equations: The Identifiability Problem [52.158782751264205]
Symbolic recovery of differential equations is the ambitious attempt at automating the derivation of governing equations.
We provide both necessary and sufficient conditions for a function to uniquely determine the corresponding differential equation.
We then use our results to devise numerical algorithms aiming to determine whether a function solves a differential equation uniquely.
arXiv Detail & Related papers (2022-10-15T17:32:49Z) - AI-enhanced iterative solvers for accelerating the solution of large scale parametrized linear systems of equations [0.0]
This paper exploits up-to-date machine learning tools to deliver customized iterative solvers for linear systems of equations.
The results indicate the superiority of the proposed solvers over conventional iterative solution schemes.
arXiv Detail & Related papers (2022-07-06T09:47:14Z) - An application of the splitting-up method for the computation of a neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - Scalable Gradients for Stochastic Differential Equations [40.70998833051251]
The adjoint sensitivity method scalably computes gradients of solutions of ordinary differential equations.
We generalize this method to stochastic differential equations, allowing time-efficient and constant-memory computation.
We use our method to fit stochastic dynamics defined by neural networks, achieving competitive performance on a 50-dimensional motion capture dataset.
arXiv Detail & Related papers (2020-01-05T23:05:55Z)
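The adjoint sensitivity idea mentioned in the entry above can be illustrated on a deterministic toy problem. The sketch below is my own minimal example, not the paper's method: a forward Euler solve of dz/dt = theta*z followed by a backward adjoint pass that accumulates dL/dtheta for the loss L = z(T), using only O(1) extra state per step:

```python
# Hedged sketch of the adjoint sensitivity method on a scalar linear ODE
# dz/dt = theta * z, discretized with forward Euler. Names are illustrative.

def forward_euler(z0, theta, h, n):
    """Forward pass: return the full trajectory [z_0, ..., z_n]."""
    zs = [z0]
    for _ in range(n):
        zs.append(zs[-1] + h * theta * zs[-1])   # z_{k+1} = z_k * (1 + h*theta)
    return zs

def adjoint_grad(zs, theta, h):
    """Backward pass: propagate the adjoint a_k = dL/dz_k and
    accumulate dL/dtheta for the loss L = z_n."""
    a = 1.0      # dL/dz_n
    grad = 0.0
    for z in reversed(zs[:-1]):
        grad += a * h * z          # this step's contribution to dL/dtheta
        a *= 1.0 + h * theta       # pull the adjoint back one step
    return grad

z0, theta, h, n = 1.0, 0.3, 0.01, 100
zs = forward_euler(z0, theta, h, n)
g = adjoint_grad(zs, theta, h)
# matches the closed-form gradient of the discrete solution:
# z_n = z0*(1+h*theta)^n, so dz_n/dtheta = z0 * n * h * (1+h*theta)^(n-1)
```

The paper's contribution, per the summary, is extending this backward-pass construction to stochastic differential equations while keeping memory cost constant in the number of solver steps.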
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.