Neural Lumped Parameter Differential Equations with Application in
Friction-Stir Processing
- URL: http://arxiv.org/abs/2304.09047v1
- Date: Tue, 18 Apr 2023 15:11:27 GMT
- Title: Neural Lumped Parameter Differential Equations with Application in
Friction-Stir Processing
- Authors: James Koch, WoongJo Choi, Ethan King, David Garcia, Hrishikesh Das,
Tianhao Wang, Ken Ross, Keerti Kappagantula
- Abstract summary: Lumped parameter methods aim to simplify the evolution of spatially-extended or continuous physical systems.
We build upon the notion of the Universal Differential Equation to construct data-driven models for reducing dynamics to that of a lumped parameter.
- Score: 2.158307833088858
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lumped parameter methods aim to simplify the evolution of spatially-extended
or continuous physical systems to that of a "lumped" element representative of
the physical scales of the modeled system. For systems where the definition of
a lumped element or its associated physics may be unknown, modeling tasks may
be restricted to full-fidelity simulations of the physics of a system. In this
work, we consider data-driven modeling tasks with limited point-wise
measurements of otherwise continuous systems. We build upon the notion of the
Universal Differential Equation (UDE) to construct data-driven models for
reducing dynamics to that of a lumped parameter and inferring its properties.
The flexibility of UDEs allows for composing various known physical priors
suitable for application-specific modeling tasks, including lumped parameter
methods. The motivating example for this work is the plunge and dwell stages
of friction-stir welding; specifically, (i) mapping power input into the tool
to a point measurement of temperature and (ii) using this learned mapping for
process control.
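As a minimal, hedged illustration of the idea (not the authors' implementation),
the sketch below poses a lumped thermal model of the tool as a single ODE whose
unmodeled physics are absorbed by a small neural correction term, in the spirit
of a Universal Differential Equation. The capacitance, loss coefficient,
efficiency, power schedule, network size, and forward-Euler integration are all
illustrative assumptions.

```python
# Sketch of a neural lumped-parameter ODE (UDE-style); all coefficients,
# the network, and the power schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Tiny untrained network (T, P) -> correction to dT/dt; in practice its
# weights would be fit against point-wise thermocouple measurements.
W1, b1 = 0.1 * rng.standard_normal((8, 2)), np.zeros(8)
W2, b2 = 0.1 * rng.standard_normal((1, 8)), np.zeros(1)

def nn_correction(T, P):
    """Learned closure term absorbing physics the lumped model misses."""
    h = np.tanh(W1 @ np.array([T, P]) + b1)
    return float(W2 @ h + b2)

def dTdt(T, P, C=200.0, h_loss=3.0, T_amb=25.0, eta=0.7):
    """Lumped heat balance: thermal capacitance C, convective loss to ambient,
    fraction eta of tool power P deposited as heat, plus the neural term."""
    return (eta * P - h_loss * (T - T_amb)) / C + nn_correction(T, P)

def simulate(power, dt=0.1, T0=25.0):
    """Forward-Euler rollout of the lumped model over a power schedule."""
    T = np.empty(len(power) + 1)
    T[0] = T0
    for k, P in enumerate(power):
        T[k + 1] = T[k] + dt * dTdt(T[k], P)
    return T

# Plunge (high power) followed by dwell (reduced power), 0.1 s steps.
power_profile = np.concatenate([np.full(200, 2000.0), np.full(400, 1200.0)])
temps = simulate(power_profile)
print(f"peak lumped temperature: {temps.max():.1f} C")
```

In the paper's setting, a trained mapping of this kind from power input to
temperature could then sit inside a feedback or model-predictive controller for
the plunge and dwell stages; the rollout function above is the piece such a
controller would query.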
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z) - Discovering Interpretable Physical Models using Symbolic Regression and
Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z) - Reduced order modeling of parametrized systems through autoencoders and
SINDy approach: continuation of periodic solutions [0.0]
This work presents a data-driven, non-intrusive framework which combines ROM construction with reduced dynamics identification.
The proposed approach leverages autoencoder neural networks with parametric sparse identification of nonlinear dynamics (SINDy) to construct a low-dimensional dynamical model.
These models aim to track the evolution of periodic steady-state responses as functions of system parameters, avoiding computation of the transient phase and enabling the detection of instabilities and bifurcations (a minimal sketch of the SINDy regression step appears after this list).
arXiv Detail & Related papers (2022-11-13T01:57:18Z) - Differentiable physics-enabled closure modeling for Burgers' turbulence [0.0]
We discuss an approach using the differentiable physics paradigm that combines known physics with machine learning to develop closure models for turbulence problems.
We train a series of models that incorporate varying degrees of physical assumptions, using an a posteriori loss function to test their efficacy.
We find that constraining models with inductive biases in the form of partial differential equations that contain known physics or existing closure approaches produces highly data-efficient, accurate, and generalizable models.
arXiv Detail & Related papers (2022-09-23T14:38:01Z) - Neural Implicit Representations for Physical Parameter Inference from a Single Video [49.766574469284485]
We propose to combine neural implicit representations for appearance modeling with neural ordinary differential equations (ODEs) for modelling physical phenomena.
Our proposed model combines several unique advantages: (i) Contrary to existing approaches that require large training datasets, we are able to identify physical parameters from only a single video.
(ii) The use of neural implicit representations enables the processing of high-resolution videos and the synthesis of photo-realistic images.
arXiv Detail & Related papers (2022-04-29T11:55:35Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z) - Mixed Effects Neural ODE: A Variational Approximation for Analyzing the
Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive Evidence Based Lower Bounds for ME-NODE, and develop (efficient) training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z) - AdjointNet: Constraining machine learning models with physics-based
codes [0.17205106391379021]
This paper proposes a physics constrained machine learning framework, AdjointNet, allowing domain scientists to embed their physics code in neural network training.
We show that the proposed AdjointNet framework can be used for parameter estimation (and uncertainty quantification by extension) and experimental design using active learning.
arXiv Detail & Related papers (2021-09-08T22:43:44Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consist of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z) - A physics-informed operator regression framework for extracting
data-driven continuum models [0.0]
We present a framework for discovering continuum models from high fidelity molecular simulation data.
Our approach applies a neural network parameterization of governing physics in modal space.
We demonstrate the effectiveness of our framework for a variety of physics, including local and nonlocal diffusion processes and single and multiphase flows.
arXiv Detail & Related papers (2020-09-25T01:13:51Z) - Automatic Differentiation and Continuous Sensitivity Analysis of Rigid
Body Dynamics [15.565726546970678]
We introduce a differentiable physics simulator for rigid body dynamics.
In the context of trajectory optimization, we introduce a closed-loop model-predictive control algorithm.
arXiv Detail & Related papers (2020-01-22T03:54:00Z)
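The reduced-order modeling entry above pairs an autoencoder with parametric
SINDy. As a rough, self-contained illustration of the SINDy regression step
alone (not the cited paper's code), the sketch below identifies a two-state
damped oscillator from synthetic data using a polynomial candidate library and
sequentially thresholded least squares; the system, library, and threshold are
assumptions chosen for illustration.

```python
# Minimal SINDy-style sparse regression sketch; system and settings are
# illustrative assumptions, not taken from the cited paper.
import numpy as np

def polynomial_library(X):
    """Candidate functions [1, x, y, x^2, x*y, y^2] for 2-state data X (n, 2)."""
    x, y = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

def stlsq(Theta, dXdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: fit, prune small terms, refit."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):
            keep = ~small[:, j]
            if keep.any():
                Xi[keep, j] = np.linalg.lstsq(Theta[:, keep], dXdt[:, j],
                                              rcond=None)[0]
    return Xi

# Synthetic trajectory of dx/dt = -0.1*x + 2*y, dy/dt = -2*x - 0.1*y.
dt = 0.01
t = np.arange(0.0, 10.0, dt)
X = np.column_stack([np.exp(-0.1 * t) * np.cos(2 * t),
                     -np.exp(-0.1 * t) * np.sin(2 * t)])
dXdt = np.gradient(X, dt, axis=0)        # finite-difference time derivatives
Xi = stlsq(polynomial_library(X), dXdt)  # sparse coefficient matrix, (6, 2)
print(np.round(Xi, 2))                   # nonzero entries recover -0.1 and +/-2
```

In the cited framework this regression is performed in the latent space of an
autoencoder rather than on raw state measurements, with the library
coefficients parameterized by the system parameters.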
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.