chebgreen: Learning and Interpolating Continuous Empirical Green's Functions from Data
- URL: http://arxiv.org/abs/2501.18715v2
- Date: Wed, 12 Feb 2025 17:41:57 GMT
- Title: chebgreen: Learning and Interpolating Continuous Empirical Green's Functions from Data
- Authors: Harshwardhan Praveen, Jacob Brown, Christopher Earls
- Abstract summary: We present a mesh-independent, data-driven library, chebgreen, to model one-dimensional systems.
We learn an Empirical Green's Function for the associated, but hidden, boundary value problem.
We uncover the Green's function, at an unseen control parameter value, by interpolating the left and right singular functions within a suitable library.
- Abstract: In this work, we present a mesh-independent, data-driven library, chebgreen, to mathematically model one-dimensional systems that possess an associated control parameter and whose governing partial differential equation is unknown. The proposed method learns an Empirical Green's Function for the associated, but hidden, boundary value problem in the form of a Rational Neural Network, from which we subsequently construct a bivariate representation in a Chebyshev basis. We uncover the Green's function at an unseen control parameter value by interpolating the left and right singular functions within a suitable library, expressed as points on a manifold of Quasimatrices, while the associated singular values are interpolated with Lagrange polynomials.
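To make the interpolation step concrete, the following minimal sketch (hypothetical data and function names, not the chebgreen API) Lagrange-interpolates the singular values of learned Green's functions across a control-parameter grid and applies the resulting rank-3 kernel to a forcing by quadrature:

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through
    the points (x_nodes[i], y_nodes[i]) at x."""
    total = 0.0
    for i, xi in enumerate(x_nodes):
        basis = 1.0
        for j, xj in enumerate(x_nodes):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += y_nodes[i] * basis
    return total

# Hypothetical data: three leading singular values sigma_k(theta) of
# Green's functions learned at four control-parameter values theta.
thetas = [0.5, 1.0, 1.5, 2.0]
sigmas = np.array([
    [1.00, 0.40, 0.10],
    [0.90, 0.35, 0.08],
    [0.85, 0.30, 0.07],
    [0.80, 0.28, 0.06],
])

def interp_singular_values(theta_new):
    """Lagrange-interpolate each singular value across the parameter grid."""
    return np.array([lagrange_eval(thetas, sigmas[:, k], theta_new)
                     for k in range(sigmas.shape[1])])

# Apply the reconstructed kernel to a forcing f through
#   u(x) = sum_k sigma_k * psi_k(x) * integral of phi_k(y) f(y) dy,
# with made-up sine modes standing in for the learned singular functions.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
psi = np.stack([np.sqrt(2) * np.sin((k + 1) * np.pi * x) for k in range(3)])
phi = psi.copy()
f = np.ones_like(x)
sig = interp_singular_values(1.25)       # unseen control-parameter value
coeffs = (phi * f).sum(axis=1) * dx      # crude rectangle-rule quadrature
u = (sig[:, None] * psi * coeffs[:, None]).sum(axis=0)
```

In chebgreen itself, the left and right singular functions are additionally interpolated as points on a manifold of Quasimatrices; the fixed sine modes above stand in for them purely for illustration. Note that a Lagrange interpolant reproduces its nodes exactly, so `interp_singular_values(1.0)` returns the second row of `sigmas`.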
Related papers
- An explainable operator approximation framework under the guideline of Green's function
We introduce a novel framework, GreensONet, to learn embedded Green's functions and solve PDEs via Green's integral formulation.
The framework's accuracy and generalization ability surpass those of existing methods.
arXiv Detail & Related papers (2024-12-21T14:31:03Z)
- Neural Control Variates with Automatic Integration
This paper proposes a novel approach to construct learnable parametric control variates functions from arbitrary neural network architectures.
We use the network to approximate the anti-derivative of the integrand.
We apply our method to solve partial differential equations using the Walk-on-sphere algorithm.
arXiv Detail & Related papers (2024-09-23T06:04:28Z)
- Learning Domain-Independent Green's Function For Elliptic Partial Differential Equations
Green's function characterizes a partial differential equation (PDE) and represents its solution over the entire domain as an integral.
We propose a novel boundary integral network to learn the domain-independent Green's function, referred to as BIN-G.
We demonstrate that our numerical scheme enables fast training and accurate evaluation of the Green's function for PDEs with variable coefficients.
arXiv Detail & Related papers (2024-01-30T17:00:22Z)
- Symmetric Single Index Learning
One popular model is the single-index model, in which labels are produced by an unknown linear projection with a possibly unknown link function.
We consider single index learning in the setting of symmetric neural networks.
arXiv Detail & Related papers (2023-10-03T14:59:00Z)
- Calculating the Single-Particle Many-body Green's Functions via the Quantum Singular Value Transform Algorithm
We implement a noise-free simulation of the technique to investigate how it can be used to perform matrix inversion.
We also propose a new circuit construction for the linear combination of unitaries block encoding technique, that reduces the number of single and two-qubit gates required.
arXiv Detail & Related papers (2023-07-25T15:38:03Z)
- Data-driven discovery of Green's functions
This thesis introduces theoretical results and deep learning algorithms to learn Green's functions associated with linear partial differential equations.
The construction connects the fields of PDE learning and numerical linear algebra.
Rational neural networks (NNs) are introduced and consist of neural networks with trainable rational activation functions.
arXiv Detail & Related papers (2022-10-28T09:41:50Z)
- BI-GreenNet: Learning Green's functions by boundary integral network
Green's function plays a significant role in both theoretical analysis and numerical computing of partial differential equations.
We develop a new method for computing Green's function with high accuracy.
arXiv Detail & Related papers (2022-04-28T01:42:35Z)
- Graph-adaptive Rectified Linear Unit for Graph Neural Networks
Graph Neural Networks (GNNs) have achieved remarkable success by extending traditional convolution to learning on non-Euclidean data.
We propose Graph-adaptive Rectified Linear Unit (GReLU) which is a new parametric activation function incorporating the neighborhood information in a novel and efficient way.
We conduct comprehensive experiments to show that our plug-and-play GReLU method is efficient and effective given different GNN backbones and various downstream tasks.
arXiv Detail & Related papers (2022-02-13T10:54:59Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Method of spectral Green functions in driven open quantum dynamics
A novel method based on spectral Green functions is presented for the simulation of driven open quantum dynamics.
The formalism shows remarkable analogies to the use of Green functions in quantum field theory.
The method dramatically reduces computational cost compared with simulations based on solving the full master equation.
arXiv Detail & Related papers (2020-06-04T09:41:08Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.