Hard constraint learning approaches with trainable influence functions for evolutionary equations
- URL: http://arxiv.org/abs/2502.17497v3
- Date: Mon, 24 Mar 2025 09:24:33 GMT
- Title: Hard constraint learning approaches with trainable influence functions for evolutionary equations
- Authors: Yushi Zhang, Shuai Su, Yong Wang, Yanzhong Yao
- Abstract summary: This paper develops a novel deep learning approach for solving evolutionary equations. Sequential learning strategies divide a large temporal domain into multiple subintervals and solve them one by one in chronological order. The improved hard constraint strategy strictly ensures the continuity and smoothness of the PINN solution at time interval nodes.
- Score: 8.812375888020398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops a novel deep learning approach for solving evolutionary equations, which integrates sequential learning strategies with an enhanced hard constraint strategy featuring trainable parameters, addressing the low computational accuracy of standard Physics-Informed Neural Networks (PINNs) in large temporal domains. Sequential learning strategies divide a large temporal domain into multiple subintervals and solve them one by one in chronological order, which naturally respects the principle of causality and improves the stability of the PINN solution. The improved hard constraint strategy strictly ensures the continuity and smoothness of the PINN solution at time interval nodes, and at the same time passes information from the previous interval to the next, which avoids incorrect or trivial solutions far from the initial time. Furthermore, by investigating the requirements that different types of equations place on hard constraints, we design a novel influence function with trainable parameters for hard constraints, which provides theoretical and technical support for the effective implementation of hard constraint strategies and significantly improves the universality and computational accuracy of our method. In addition, an adaptive time-domain partitioning algorithm is proposed, which plays an important role in applying the proposed method and in improving computational efficiency and accuracy. Numerical experiments verify the performance of the method. The data and code accompanying this paper are available at https://github.com/zhizhi4452/HCS.
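To make the hard constraint strategy concrete, the following is a minimal PyTorch sketch of one plausible ansatz on a subinterval [t_k, t_{k+1}]: the network output is gated by an influence function phi satisfying phi(t_k) = 0 and phi'(t_k) = 0, so the value and time derivative handed over from the previous subinterval are matched exactly at the interval node. The class name, the specific form phi(t) = 1 - exp(-alpha (t - t_k)^2), and the trainable rate alpha are illustrative assumptions, not the construction used in the paper or its repository.

```python
# Minimal sketch (assumption, not the paper's released code): a hard-constraint
# ansatz for one time subinterval [t_k, t_{k+1}] with a trainable influence
# function. phi(t_k) = 0 and phi'(t_k) = 0, so the ansatz reproduces the value
# and time derivative inherited from the previous subinterval exactly.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardConstraintAnsatz(nn.Module):
    def __init__(self, net: nn.Module, t_k: float, u_k, du_k):
        super().__init__()
        self.net = net      # backbone network: concat(x, t) -> correction term
        self.t_k = t_k      # left endpoint of the current subinterval
        self.u_k = u_k      # callable x -> u(x, t_k) from the previous interval
        self.du_k = du_k    # callable x -> du/dt(x, t_k) from the previous interval
        # trainable parameter of the influence function (one plausible choice)
        self.alpha = nn.Parameter(torch.tensor(0.0))

    def phi(self, t):
        # Influence function: phi(t_k) = 0 and phi'(t_k) = 0, so the network
        # correction is switched off, to first order, at the interval node.
        s = t - self.t_k
        return 1.0 - torch.exp(-F.softplus(self.alpha) * s**2)

    def forward(self, x, t):
        s = t - self.t_k
        # First-order continuation of the previous interval's solution plus a
        # network correction gated by the influence function.
        return self.u_k(x) + s * self.du_k(x) + self.phi(t) * self.net(torch.cat([x, t], dim=-1))
```

A similarly hedged sketch of the adaptive time-domain partitioning idea, under the assumption that a subinterval is accepted once its trained PDE residual falls below a tolerance; the halving/doubling rule and the `train_subinterval` helper are hypothetical, not the paper's algorithm:

```python
# Assumed logic: march across [t0, T], halving a subinterval whose trained
# residual stays above `tol` and tentatively coarsening after an accepted one.
# `train_subinterval(a, b)` is a hypothetical helper that trains the
# hard-constraint PINN on [a, b] and returns its final PDE residual.
def adaptive_partition(t0, T, dt0, train_subinterval, tol, dt_min=1e-4):
    t, dt, intervals = t0, dt0, []
    while t < T:
        dt = min(dt, T - t)                 # never step past the final time
        residual = train_subinterval(t, t + dt)
        if residual > tol and dt > dt_min:
            dt *= 0.5                       # refine: interval was too ambitious
        else:
            intervals.append((t, t + dt))   # accept and advance
            t += dt
            dt = min(2.0 * dt, dt0)         # tentatively coarsen again
    return intervals
```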
Related papers
- Efficiently Training Deep-Learning Parametric Policies using Lagrangian Duality [55.06411438416805]
Constrained Markov Decision Processes (CMDPs) are critical in many high-stakes applications. This paper introduces a novel approach, Two-Stage Deep Decision Rules (TS-DDR), to efficiently train parametric actor policies. It is shown to enhance solution quality and to reduce computation times by several orders of magnitude compared to current state-of-the-art methods.
arXiv Detail & Related papers (2024-05-23T18:19:47Z) - A Penalty-Based Guardrail Algorithm for Non-Decreasing Optimization with Inequality Constraints [1.5498250598583487]
Traditional mathematical programming solvers require long computational times to solve constrained minimization problems.
We propose a penalty-based guardrail algorithm (PGA) to efficiently solve them.
arXiv Detail & Related papers (2024-05-03T10:37:34Z) - TSONN: Time-stepping-oriented neural network for solving partial differential equations [1.9061608251056779]
This work integrates time-stepping method with deep learning to solve PDE problems.
The convergence of model training is significantly improved by following the trajectory of the pseudo time-stepping process.
Our results show that the proposed method achieves stable training and correct results in many problems that standard PINNs fail to solve.
arXiv Detail & Related papers (2023-10-25T09:19:40Z) - Application of deep and reinforcement learning to boundary control problems [0.6906005491572401]
This project explores the use of deep learning and reinforcement learning to solve boundary control problems.
The aim is to find optimal values for the domain boundaries such that the enclosed domain attains the desired state values.
arXiv Detail & Related papers (2023-10-21T10:56:32Z) - A unified scalable framework for causal sweeping strategies for Physics-Informed Neural Networks (PINNs) and their temporal decompositions [22.514769448363754]
Training challenges in PINNs and XPINNs for time-dependent PDEs are discussed.
We propose a new stacked-decomposition method that bridges the gap between PINNs and XPINNs.
We also formulate a new time-sweeping collocation point algorithm inspired by the causal training of PINNs proposed in prior work.
arXiv Detail & Related papers (2023-02-28T01:19:21Z) - Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
arXiv Detail & Related papers (2022-11-14T21:54:31Z) - Instance-Dependent Confidence and Early Stopping for Reinforcement Learning [99.57168572237421]
Various algorithms for reinforcement learning (RL) exhibit dramatic variation in their convergence rates as a function of problem structure.
This research provides guarantees that explain ex post the performance differences observed.
A natural next step is to convert these theoretical guarantees into guidelines that are useful in practice.
arXiv Detail & Related papers (2022-01-21T04:25:35Z) - The Statistical Complexity of Interactive Decision Making [126.04974881555094]
We provide a complexity measure, the Decision-Estimation Coefficient, that is proven to be both necessary and sufficient for sample-efficient interactive learning.
A unified algorithm design principle, Estimation-to-Decisions (E2D), transforms any algorithm for supervised estimation into an online algorithm for decision making.
arXiv Detail & Related papers (2021-12-27T02:53:44Z) - Logistic Q-Learning [87.00813469969167]
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs.
The main feature of our algorithm is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error.
arXiv Detail & Related papers (2020-10-21T17:14:31Z) - Managing caching strategies for stream reasoning with reinforcement
learning [18.998260813058305]
Stream reasoning allows efficient decision-making over continuously changing data.
We suggest a novel approach that uses Conflict-Driven Constraint Learning (CDCL) to efficiently update legacy solutions.
In particular, we study the applicability of reinforcement learning to continuously assess the utility of learned constraints.
arXiv Detail & Related papers (2020-08-07T15:01:41Z) - Combining Deep Learning and Optimization for Security-Constrained Optimal Power Flow [94.24763814458686]
Security-constrained optimal power flow (SCOPF) is fundamental in power systems.
Modeling the automatic primary response (APR) of generators within the SCOPF problem results in complex large-scale mixed-integer programs.
This paper proposes a novel approach that combines deep learning and robust optimization techniques.
arXiv Detail & Related papers (2020-07-14T12:38:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.