Control and optimization for Neural Partial Differential Equations in Supervised Learning
- URL: http://arxiv.org/abs/2506.20764v1
- Date: Wed, 25 Jun 2025 18:54:48 GMT
- Title: Control and optimization for Neural Partial Differential Equations in Supervised Learning
- Authors: Alain Bensoussan, Minh-Binh Tran, Bangjie Wang
- Abstract summary: We aim to initiate a line of research in control theory focused on optimizing and controlling the coefficients of parabolic and hyperbolic operators. In supervised learning, the primary objective is to transport initial data toward target data through the layers of a neural network. We propose a novel perspective: neural networks can be interpreted as partial differential equations (PDEs).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although there is a substantial body of literature on control and optimization problems for parabolic and hyperbolic systems, the specific problem of controlling and optimizing the coefficients of the associated operators within such systems has not yet been thoroughly explored. In this work, we aim to initiate a line of research in control theory focused on optimizing and controlling the coefficients of these operators, a problem that naturally arises in the context of neural networks and supervised learning. In supervised learning, the primary objective is to transport initial data toward target data through the layers of a neural network. We propose a novel perspective: neural networks can be interpreted as partial differential equations (PDEs). From this viewpoint, the control problem traditionally studied in the context of ordinary differential equations (ODEs) is reformulated as a control problem for PDEs, specifically targeting the optimization and control of coefficients in parabolic and hyperbolic operators. To the best of our knowledge, this specific problem has not yet been systematically addressed in the control theory of PDEs. To this end, we propose a dual system formulation for the control and optimization problem associated with parabolic PDEs, laying the groundwork for the development of efficient numerical schemes in future research. We also provide a theoretical proof showing that the control and optimization problem for parabolic PDEs admits minimizers. Finally, we investigate the control problem associated with hyperbolic PDEs and prove the existence of solutions for a corresponding approximated control problem.
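To make the coefficient-control viewpoint concrete, a schematic version of the parabolic problem described above can be written as follows. The notation is ours, for illustration only; the paper's exact cost functional, boundary conditions, and admissible coefficient class may differ. The coefficient a of the parabolic operator acts as the control, and the resulting flow should transport the initial data y_0 toward the target data by time T.

```latex
% Schematic parabolic coefficient-control problem (illustrative notation only;
% the paper's precise functional and admissible coefficient class may differ).
\[
\begin{aligned}
\min_{a \in \mathcal{A}} \quad
  & J(a) = \tfrac{1}{2}\,\bigl\lVert y(\cdot,T;a) - y_{\mathrm{target}} \bigr\rVert_{L^{2}(\Omega)}^{2}
           + \tfrac{\lambda}{2}\,\lVert a \rVert^{2} \\
\text{subject to} \quad
  & \partial_{t} y - \nabla \cdot \bigl( a(x,t)\, \nabla y \bigr) = 0
    \quad \text{in } \Omega \times (0,T), \\
  & y(\cdot,0) = y_{0} \ \text{in } \Omega,
    \qquad y = 0 \ \text{on } \partial\Omega \times (0,T).
\end{aligned}
\]
```

In the hyperbolic case mentioned at the end of the abstract, the state equation would instead be wave-like (for example, \(\partial_{tt} y - \nabla\cdot(a\nabla y) = 0\)) with the same kind of coefficient control. A toy numerical sketch of the parabolic problem appears after the related-papers list below.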
Related papers
- End-to-End Learning Framework for Solving Non-Markovian Optimal Control [9.156265463755807]
We propose an innovative system identification method and control strategy for FOLTI systems. We also develop the first end-to-end data-driven learning framework, Fractional-Order Learning for Optimal Control (FOLOC).
arXiv Detail & Related papers (2025-02-07T04:18:56Z)
- HypeRL: Parameter-Informed Reinforcement Learning for Parametric PDEs [0.6249768559720122]
We devise a new, general-purpose reinforcement learning strategy for the optimal control of PDEs. HypeRL aims at approximating the optimal control policy directly. We validate the proposed approach on two PDE-constrained optimal control benchmarks.
arXiv Detail & Related papers (2025-01-08T14:38:03Z)
- Interpretable and Efficient Data-driven Discovery and Control of Distributed Systems [1.5195865840919498]
Reinforcement Learning (RL) has emerged as a promising control paradigm for systems with high-dimensional, nonlinear dynamics.
We propose a data-efficient, interpretable, and scalable framework for PDE control.
arXiv Detail & Related papers (2024-11-06T18:26:19Z)
- Diffusion Models as Network Optimizers: Explorations and Analysis [71.69869025878856]
Generative diffusion models (GDMs) have emerged as a promising new approach to network optimization. In this study, we first explore the intrinsic characteristics of generative models. We provide a concise theoretical and intuitive demonstration of the advantages of generative models over discriminative network optimization.
arXiv Detail & Related papers (2024-11-01T09:05:47Z)
- Stochastic Optimal Control Matching [53.156277491861985]
Our work introduces Stochastic Optimal Control Matching (SOCM), a novel Iterative Diffusion Optimization (IDO) technique for stochastic optimal control.
The control is learned via a least squares problem by trying to fit a matching vector field.
Experimentally, our algorithm achieves lower error than all the existing IDO techniques for optimal control.
arXiv Detail & Related papers (2023-12-04T16:49:43Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Optimal control for state preparation in two-qubit open quantum systems driven by coherent and incoherent controls via GRAPE approach [77.34726150561087]
We consider a model of two qubits driven by coherent and incoherent time-dependent controls.
The dynamics of the system is governed by a Gorini-Kossakowski-Sudarshan-Lindblad master equation.
We study the evolution of the von Neumann entropy, purity, and one-qubit reduced density matrices under optimized controls.
arXiv Detail & Related papers (2022-11-04T15:20:18Z)
- Near-optimal control of dynamical systems with neural ordinary differential equations [0.0]
Recent advances in deep learning and neural network-based optimization have contributed to the development of methods that can help solve control problems involving high-dimensional dynamical systems.
We first analyze how truncated and non-truncated backpropagation through time affect runtime performance and the ability of neural networks to learn optimal control functions.
arXiv Detail & Related papers (2022-06-22T14:11:11Z)
- LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling entanglements.
Experiments on solving Poisson's equation and the (2D and 3D) Navier-Stokes equations demonstrate that long-range entanglements can be well modeled by LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z)
- Physics-informed neural networks for PDE-constrained optimization and control [0.0]
Control Physics-Informed Neural Networks (Control PINNs) simultaneously solve for a given system state and its respective optimal control.
The success of Control PINNs is demonstrated by solving several open-loop optimal control problems.
arXiv Detail & Related papers (2022-05-06T17:22:36Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that this resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
- Learning to Control PDEs with Differentiable Physics [102.36050646250871]
We present a novel hierarchical predictor-corrector scheme which enables neural networks to learn to understand and control complex nonlinear physical systems over long time frames.
We demonstrate that our method successfully develops an understanding of complex physical systems and learns to control them for tasks involving PDEs.
arXiv Detail & Related papers (2020-01-21T11:58:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
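As a companion to the schematic formulation given under the abstract above, the following self-contained sketch (our own construction, not code from the paper) discretizes a 1D parabolic equation with an explicit finite-difference scheme, treats the spatially varying diffusion coefficient as the control, and adjusts it by a crude finite-difference gradient descent so that the terminal state is transported toward a target profile, mirroring the supervised-learning objective.

```python
import numpy as np

# Toy sketch (not the paper's algorithm): treat the diffusion coefficient a(x)
# of a 1D heat equation as the "trainable" control and adjust it so that the
# PDE flow carries the initial datum toward a target datum at time T.
nx, nt = 30, 250
dx, dt = 1.0 / (nx - 1), 4.0e-4      # dt respects the explicit-scheme stability bound
x = np.linspace(0.0, 1.0, nx)

y0 = np.sin(np.pi * x)               # initial data (the "input")
y_target = 0.5 * np.sin(np.pi * x)   # target data (the "output")

def solve(a):
    """Explicit finite differences for y_t = (a(x) y_x)_x with y = 0 on the boundary."""
    y = y0.copy()
    for _ in range(nt):
        flux = a[:-1] * np.diff(y) / dx      # a * y_x at the cell interfaces
        y[1:-1] += dt * np.diff(flux) / dx   # divergence of the flux at interior nodes
        y[0] = y[-1] = 0.0
    return y

def loss(a):
    return 0.5 * dx * np.sum((solve(a) - y_target) ** 2)

a = np.full(nx, 0.5)                 # initial guess for the coefficient/control
eps, lr = 1e-4, 25.0                 # ad-hoc finite-difference step and learning rate
print("initial misfit:", loss(a))
for _ in range(40):
    base, grad = loss(a), np.zeros(nx)
    for i in range(nx):              # crude finite-difference gradient in a
        a_pert = a.copy()
        a_pert[i] += eps
        grad[i] = (loss(a_pert) - base) / eps
    a = np.clip(a - lr * grad, 0.05, 1.0)   # keep the operator uniformly parabolic
print("final misfit:", loss(a))
```

The grid resolution, step size, and clipping bounds are ad hoc; the clipping merely keeps the coefficient positive and bounded, so the discrete operator stays parabolic and the explicit scheme stable.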