Solving and learning advective multiscale Darcian dynamics with the Neural Basis Method
- URL: http://arxiv.org/abs/2602.17776v1
- Date: Thu, 19 Feb 2026 19:17:55 GMT
- Title: Solving and learning advective multiscale Darcian dynamics with the Neural Basis Method
- Authors: Yuhe Wang, Min Wang
- Abstract summary: We introduce the Neural Basis Method, a projection-based formulation that couples a physics-conforming neural basis space with an operator-induced residual metric. Our method produces accurate and robust solutions in single solves and enables fast and effective parametric inference with operator learning.
- Score: 4.331539387944184
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-governed models are increasingly paired with machine learning for accelerated predictions, yet most "physics-informed" formulations treat the governing equations as a penalty loss whose scale and meaning are set by heuristic balancing. This blurs operator structure, thereby confounding solution approximation error with governing-equation enforcement error and making the solving and learning process hard to interpret and control. Here we introduce the Neural Basis Method, a projection-based formulation that couples a predefined, physics-conforming neural basis space with an operator-induced residual metric to obtain a well-conditioned deterministic minimization. Stability and reliability then hinge on this metric: the residual is not merely an optimization objective but a computable certificate tied to approximation and enforcement, remaining stable under basis enrichment and yielding reduced coordinates that are learnable across parametric instances. We use advective multiscale Darcian dynamics as a concrete demonstration of this broader point. Our method produces accurate and robust solutions in single solves and enables fast and effective parametric inference with operator learning.
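The projection idea at the heart of the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it fixes a small set of tanh ridge functions as a stand-in "neural basis," solves a 1D Poisson-type model problem -u'' = f by least-squares minimization of the collocated PDE residual, and reports that residual norm as a computable error indicator. The basis size, weight scale, and manufactured solution below are all illustrative assumptions.

```python
import numpy as np

# Illustrative "neural basis": tanh ridge functions with fixed random
# parameters (a stand-in for the paper's physics-conforming basis).
rng = np.random.default_rng(0)
n_basis, n_pts = 40, 200
w = rng.normal(size=n_basis) * 4.0
b = rng.normal(size=n_basis)

x = np.linspace(0.0, 1.0, n_pts)
f = np.pi**2 * np.sin(np.pi * x)   # manufactured source: exact u* = sin(pi x)

def phi(pts):
    """Basis values, shape (len(pts), n_basis)."""
    return np.tanh(np.outer(pts, w) + b)

def phi_xx(pts):
    """Second derivatives of tanh(w x + b): -2 t (1 - t^2) w^2."""
    t = np.tanh(np.outer(pts, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2

# Stack interior PDE residual rows (-u'' - f = 0) with boundary rows
# (u(0) = u(1) = 0), then minimize the residual norm over coefficients c.
A = np.vstack([-phi_xx(x), phi(np.array([0.0, 1.0]))])
rhs = np.concatenate([f, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

u = phi(x) @ c
# The relative residual doubles as a computable certificate of how well
# the governing equation is enforced in this basis.
residual = np.linalg.norm(A @ c - rhs) / np.linalg.norm(rhs)
err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Because the basis is fixed, the solve is a single well-conditioned linear least-squares problem rather than a nonconvex training loop, which is the structural point the abstract makes; the paper's operator-induced metric and Darcy setting go well beyond this toy.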
Related papers
- Efficient quantum machine learning with inverse-probability algebraic corrections [2.7412662946127764]
Quantum neural networks (QNNs) provide expressive probabilistic models by leveraging quantum superposition and entanglement.
Existing training approaches largely rely on gradient-based procedural optimization.
arXiv Detail & Related papers (2026-01-23T11:28:53Z) - Convergence Rates for Learning Pseudo-Differential Operators [1.1559118525005183]
We formulate learning over elliptic pseudo-differential operators as a structured infinite-dimensional regression problem with multiscale sparsity.
We show that the learned operator induces an efficient and stable Galerkin solver whose numerical error matches its statistical accuracy.
Our results contribute to bringing together operator learning, data-driven solvers, and wavelet methods in scientific computing.
arXiv Detail & Related papers (2026-01-08T01:21:08Z) - A statistical physics framework for optimal learning [1.243080988483032]
We combine statistical physics with control theory in a unified theoretical framework to identify optimal protocols in neural network models.
We formulate the design of learning protocols as an optimal control problem directly on the dynamics of order parameters.
This framework encompasses a variety of learning scenarios, optimization constraints, and control budgets.
arXiv Detail & Related papers (2025-07-10T16:39:46Z) - Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system.
We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach [3.453622106101339]
We propose a framework towards achieving two intertwined objectives: (i) equipping reinforcement learning with active exploration and deliberate information gathering, and (ii) overcoming the computational intractability of optimal control law.
We approach both objectives by using reinforcement learning to compute the optimal control law.
Unlike fixed exploration and exploitation balance, caution and probing are employed automatically by the controller in real-time, even after the learning process is terminated.
arXiv Detail & Related papers (2023-09-18T18:05:35Z) - Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or complex mixtures of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Annealing Optimization for Progressive Learning with Stochastic
Approximation [0.0]
We introduce a learning model designed to meet the needs of applications in which computational resources are limited.
We develop an online prototype-based learning algorithm that is formulated as an online gradient-free stochastic approximation algorithm.
The learning model can be viewed as an interpretable and progressively growing competitive neural network model to be used for supervised, unsupervised, and reinforcement learning.
arXiv Detail & Related papers (2022-09-06T21:31:01Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Deep learning: a statistical viewpoint [120.94133818355645]
Deep learning has revealed some major surprises from a theoretical perspective.
In particular, simple gradient methods easily find near-optimal solutions to non-convex training problems.
We conjecture that specific principles underlie these phenomena.
arXiv Detail & Related papers (2021-03-16T16:26:36Z) - Neural Control Variates [71.42768823631918]
We show that a set of neural networks can meet the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
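The control-variate principle the last paper builds on can be sketched in a few lines. This is the generic textbook construction, not the paper's neural estimator: reduce the variance of a Monte Carlo estimate of E[f(X)] by subtracting a correlated quantity g(X) whose expectation is known in closed form. The choice of f, g, and sample size below is an illustrative assumption.

```python
import numpy as np

# Classic control variate: estimate E[e^X] for X ~ U(0, 1), whose true
# value is e - 1, using g(X) = X with known mean E[X] = 1/2 as the control.
rng = np.random.default_rng(1)
x = rng.uniform(size=100_000)

f = np.exp(x)   # samples of the target integrand
g = x           # correlated control variate with known mean 0.5

# The variance-minimizing coefficient is beta = Cov(f, g) / Var(g);
# the paper's contribution is learning such a corrector with a network.
beta = np.cov(f, g)[0, 1] / np.var(g)

plain = f.mean()                         # plain Monte Carlo estimate
cv = (f - beta * (g - 0.5)).mean()       # control-variate estimate
truth = np.e - 1.0
```

Since g is strongly correlated with f here, the corrected estimator has far lower variance than the plain average at the same sample count, which is the effect the paper pursues with a learned, neural control variate.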
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.