Scaling physics-informed hard constraints with mixture-of-experts
- URL: http://arxiv.org/abs/2402.13412v1
- Date: Tue, 20 Feb 2024 22:45:00 GMT
- Title: Scaling physics-informed hard constraints with mixture-of-experts
- Authors: Nithin Chalapathi and Yiheng Du and Aditi Krishnapriyan
- Abstract summary: We develop a scalable approach to enforce hard physical constraints using Mixture-of-Experts (MoE).
MoE imposes the constraint over smaller domains, each of which is solved by an "expert" through differentiable optimization.
Compared to standard differentiable optimization, our scalable approach achieves greater accuracy in the neural PDE solver setting.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imposing known physical constraints, such as conservation laws, during neural
network training introduces an inductive bias that can improve accuracy,
reliability, convergence, and data efficiency for modeling physical dynamics.
While such constraints can be softly imposed via loss function penalties,
recent advancements in differentiable physics and optimization improve
performance by incorporating PDE-constrained optimization as individual layers
in neural networks. This enables a stricter adherence to physical constraints.
However, imposing hard constraints significantly increases computational and
memory costs, especially for complex dynamical systems. This is because it
requires solving an optimization problem over a large number of points in a
mesh, representing spatial and temporal discretizations, which greatly
increases the complexity of the constraint. To address this challenge, we
develop a scalable approach to enforce hard physical constraints using
Mixture-of-Experts (MoE), which can be used with any neural network
architecture. Our approach imposes the constraint over smaller decomposed
domains, each of which is solved by an "expert" through differentiable
optimization. During training, each expert independently performs a localized
backpropagation step by leveraging the implicit function theorem; the
independence of each expert allows for parallelization across multiple GPUs.
Compared to standard differentiable optimization, our scalable approach
achieves greater accuracy in the neural PDE solver setting for predicting the
dynamics of challenging non-linear systems. We also improve training stability
and require significantly less computation time during both training and
inference stages.
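The abstract describes the constraint layer only at a high level. As a minimal, hypothetical sketch of the idea (not the authors' code), assume the discretized constraint on each subdomain is linear, A u = b; the equality-constrained projection each expert solves then has a closed form that autodiff differentiates exactly, matching the implicit-function-theorem gradient. The function names, shapes, and toy conservation constraint below are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def project_linear(u_hat, A, b):
    # Solve min_u ||u - u_hat||^2  s.t.  A u = b via the KKT system.
    # For this equality-constrained QP the solution is closed form, so
    # JAX autodiff through it gives exact constraint-aware gradients.
    lam = jnp.linalg.solve(A @ A.T, A @ u_hat - b)  # dual variables
    return u_hat - A.T @ lam                        # feasible projection

def moe_constraint_layer(u_hat, A_blocks, b_blocks, n_experts):
    # Split the mesh into equal subdomains; each "expert" enforces its
    # local constraint independently, and vmap runs them in parallel.
    u_local = u_hat.reshape(n_experts, -1)
    u_proj = jax.vmap(project_linear)(u_local, A_blocks, b_blocks)
    return u_proj.reshape(-1)

# Toy usage: 4 experts, 8 mesh points each, one conservation-style
# constraint per subdomain (each local sum must equal zero).
u_hat = jax.random.normal(jax.random.PRNGKey(0), (32,))  # network output
A_blocks = jnp.ones((4, 1, 8))   # sums the values in each subdomain
b_blocks = jnp.zeros((4, 1))     # prescribed conserved quantity
u = moe_constraint_layer(u_hat, A_blocks, b_blocks, 4)
print(u.reshape(4, 8).sum(axis=1))  # ~[0, 0, 0, 0]
```

In the paper's more general nonlinear setting, each expert instead differentiates through a full optimization solve via the implicit function theorem; a sketch of that step appears after the related-papers list below.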
Related papers
- Text2PDE: Latent Diffusion Models for Accessible Physics Simulation [7.16525545814044]
We introduce several methods to apply latent diffusion models to physics simulation.
We show that the proposed approach is competitive with current neural PDE solvers in both accuracy and efficiency.
By introducing a scalable, accurate, and usable physics simulator, we hope to bring neural PDE solvers closer to practical use.
arXiv Detail & Related papers (2024-10-02T01:09:47Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Physics-aware deep learning framework for linear elasticity [0.0]
The paper presents an efficient and robust data-driven deep learning (DL) computational framework for linear continuum elasticity problems.
For an accurate representation of the field variables, a multi-objective loss function is proposed.
Several benchmark problems, including the Airy solution to elasticity and the Kirchhoff-Love plate problem, are solved.
arXiv Detail & Related papers (2023-02-19T20:33:32Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
- Physics-constrained Unsupervised Learning of Partial Differential Equations using Meshes [1.066048003460524]
Graph neural networks show promise in accurately representing irregularly meshed objects and learning their dynamics.
In this work, we represent meshes naturally as graphs, process these using Graph Networks, and formulate our physics-based loss to provide an unsupervised learning framework for partial differential equations (PDEs).
Our framework will enable the application of PDE solvers in interactive settings, such as model-based control of soft-body deformations.
arXiv Detail & Related papers (2022-03-30T19:22:56Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem-solving cost without sacrificing solution quality.
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Physics and Equality Constrained Artificial Neural Networks: Application to Partial Differential Equations [1.370633147306388]
Physics-informed neural networks (PINNs) have been proposed to learn the solution of partial differential equations (PDEs).
Here, we show that this specific way of formulating the objective function is the source of severe limitations in the PINN approach.
We propose a versatile framework that can tackle both inverse and forward problems.
arXiv Detail & Related papers (2021-09-30T05:55:35Z)
- Scalable Differentiable Physics for Learning and Control [99.4302215142673]
Differentiable physics is a powerful approach to learning and control problems that involve physical objects and environments.
We develop a scalable framework for differentiable physics that can support a large number of objects and their interactions.
arXiv Detail & Related papers (2020-07-04T19:07:51Z)
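As referenced after the abstract above: the localized backpropagation step in the main paper, like the differentiable PDE-constrained layer in "Learning differentiable solvers for systems with hard constraints", rests on the implicit function theorem. Below is a hypothetical one-dimensional illustration, not taken from any listed paper; the residual g and the Newton loop are stand-ins, and the point is that the gradient of the constrained solution never unrolls the solver.

```python
import jax
import jax.numpy as jnp

def g(u, theta):
    # Stand-in nonlinear constraint residual: we seek g(u*, theta) = 0.
    return u**3 + theta * u - 1.0

def newton_solve(theta, iters=20):
    # Plain Newton iteration to find u* with g(u*, theta) = 0.
    u = jnp.ones(())
    for _ in range(iters):
        u = u - g(u, theta) / jax.grad(g)(u, theta)
    return u

@jax.custom_vjp
def solve(theta):
    return newton_solve(theta)

def solve_fwd(theta):
    u = newton_solve(theta)
    return u, (u, theta)

def solve_bwd(res, u_bar):
    # Implicit function theorem: du*/dtheta = -(dg/du)^{-1} dg/dtheta,
    # so the VJP never backpropagates through the Newton iterations.
    u, theta = res
    dg_du = jax.grad(g, argnums=0)(u, theta)
    dg_dtheta = jax.grad(g, argnums=1)(u, theta)
    return (-u_bar * dg_dtheta / dg_du,)

solve.defvjp(solve_fwd, solve_bwd)

# Gradient of the constrained solution w.r.t. theta, via the IFT:
print(jax.grad(solve)(jnp.array(2.0)))  # ~ -0.173
```

The same pattern extends to vector-valued constraints by replacing the scalar divisions with linear solves against the constraint Jacobian, which is what makes the per-expert backward pass local and parallelizable.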
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.