Polyconvex Physics-Augmented Neural Network Constitutive Models in Principal Stretches
- URL: http://arxiv.org/abs/2503.00575v1
- Date: Sat, 01 Mar 2025 17:55:09 GMT
- Title: Polyconvex Physics-Augmented Neural Network Constitutive Models in Principal Stretches
- Authors: Adrian Buganza Tepole, Asghar Jadoon, Manuel Rausch, Jan N. Fuhg
- Abstract summary: Any convex function of a symmetric second-order tensor can be described with a convex and symmetric function of its eigenvalues. The ability of the model to capture arbitrary materials is demonstrated using synthetic and experimental data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Accurate constitutive models of soft materials are crucial for understanding their mechanical behavior and ensuring reliable predictions in the design process. To this end, scientific machine learning research has produced flexible and general material model architectures that can capture the behavior of a wide range of materials, reducing the need for expert-constructed closed-form models. The focus has gradually shifted towards embedding physical constraints in the network architecture to regularize these over-parameterized models. Two popular approaches are input convex neural networks (ICNN) and neural ordinary differential equations (NODE). A related alternative has been the generalization of closed-form models, such as sparse regression from a large library. Remarkably, all prior work using ICNN or NODE uses the invariants of the Cauchy-Green tensor and none uses the principal stretches. In this work, we construct general polyconvex functions of the principal stretches in a physics-aware deep-learning framework and offer insights and comparisons to invariant-based formulations. The framework is based on recent developments to characterize polyconvex functions in terms of convex functions of the right stretch tensor $\mathbf{U}$, its cofactor $\text{cof}\mathbf{U}$, and its determinant $J$. Any convex function of a symmetric second-order tensor can be described with a convex and symmetric function of its eigenvalues. Thus, we first describe convex functions of $\mathbf{U}$ and $\text{cof}\mathbf{U}$ in terms of their respective eigenvalues using deep Hölder sets composed with ICNN functions. A third ICNN takes as input $J$ and the two convex functions of $\mathbf{U}$ and $\text{cof}\mathbf{U}$, and returns the strain energy as output. The ability of the model to capture arbitrary materials is demonstrated using synthetic and experimental data.
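In code, the construction described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the symmetric convex functions of the eigenvalues are realized here as a sum of a shared one-dimensional ICNN over the eigenvalues rather than the deep Hölder sets used in the paper, and all class and parameter names (ICNN, SymmetricConvex, PolyconvexEnergy) are illustrative assumptions.

```python
# Minimal sketch of the composition described in the abstract (not the authors' code).
# Assumptions: PyTorch; the symmetric convex functions of the eigenvalues are a sum of
# a shared 1-D ICNN over the eigenvalues instead of the paper's deep Hölder sets.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Input convex neural network: convex in x because the hidden-to-hidden weights
    are clamped non-negative and softplus is convex and non-decreasing. With
    nondecreasing=True the input weights are clamped too, making the network
    convex *and* non-decreasing in x."""

    def __init__(self, in_dim, hidden=32, nondecreasing=False):
        super().__init__()
        self.nondecreasing = nondecreasing
        self.Wx = nn.ModuleList([nn.Linear(in_dim, hidden),
                                 nn.Linear(in_dim, hidden),
                                 nn.Linear(in_dim, 1)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False),
                                 nn.Linear(hidden, 1, bias=False)])

    def _wx(self, layer):
        w = layer.weight
        return w.clamp(min=0.0) if self.nondecreasing else w

    def forward(self, x):
        z = F.softplus(F.linear(x, self._wx(self.Wx[0]), self.Wx[0].bias))
        z = F.softplus(F.linear(x, self._wx(self.Wx[1]), self.Wx[1].bias)
                       + F.linear(z, self.Wz[0].weight.clamp(min=0.0)))
        return (F.linear(x, self._wx(self.Wx[2]), self.Wx[2].bias)
                + F.linear(z, self.Wz[1].weight.clamp(min=0.0)))


class SymmetricConvex(nn.Module):
    """Convex, permutation-symmetric function of three eigenvalues: a shared convex
    1-D ICNN applied to each eigenvalue and summed (a simplification of the paper's
    deep Hölder-set construction)."""

    def __init__(self):
        super().__init__()
        self.phi = ICNN(1)

    def forward(self, eigs):                       # eigs: (batch, 3)
        return sum(self.phi(eigs[:, i:i + 1]) for i in range(3))


class PolyconvexEnergy(nn.Module):
    """W(F) = Phi(f(lambda(U)), g(lambda(cof U)), J): an outer ICNN, convex and
    non-decreasing in its inputs, composed with convex symmetric functions of the
    eigenvalues of U and cof U. Both J and -J are fed in so the non-decreasing
    outer network can still represent non-monotone convex volumetric terms."""

    def __init__(self):
        super().__init__()
        self.f = SymmetricConvex()
        self.g = SymmetricConvex()
        self.Phi = ICNN(4, nondecreasing=True)

    def forward(self, F_def):                      # F_def: (batch, 3, 3), det F > 0
        C = F_def.transpose(-1, -2) @ F_def        # right Cauchy-Green tensor C = F^T F
        lam = torch.linalg.eigvalsh(C).clamp(min=1e-8).sqrt()  # principal stretches (eigenvalues of U)
        J = lam.prod(dim=-1, keepdim=True)         # J = det U = det F
        cof = J / lam                              # eigenvalues of cof U are lambda_j * lambda_k
        return self.Phi(torch.cat([self.f(lam), self.g(cof), J, -J], dim=-1))


# Example: evaluate the energy at the identity deformation
W = PolyconvexEnergy()
print(W(torch.eye(3).unsqueeze(0)))                # shape (1, 1)
```

Summing a shared convex function of each eigenvalue is a simple way to obtain convexity and permutation symmetry simultaneously; the deep Hölder sets used in the paper provide a richer family of symmetric convex functions with the same guarantees.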
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Unveiling Induction Heads: Provable Training Dynamics and Feature Learning in Transformers [54.20763128054692]
We study how a two-attention-layer transformer is trained to perform ICL on $n$-gram Markov chain data.
We prove that the gradient flow with respect to a cross-entropy ICL loss converges to a limiting model.
arXiv Detail & Related papers (2024-09-09T18:10:26Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space in which to model functions learned by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models [9.18287948559108]
We exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network.
We show that, for the same number of model parameters, the resulting quantized models have a higher bound on the VC-dimension as opposed to their non-quantized counterparts.
arXiv Detail & Related papers (2023-09-11T13:18:19Z) - Distribution learning via neural differential equations: a nonparametric statistical perspective [1.4436965372953483]
This work establishes the first general statistical convergence analysis for distribution learning via ODE models trained through likelihood transformations.
We show that the latter can be quantified via the $C^1$-metric entropy of the class $\mathcal{F}$.
We then apply this general framework to the setting of $C^k$-smooth target densities, and establish nearly minimax-optimal convergence rates for two relevant velocity field classes $\mathcal{F}$: $C^k$ functions and neural networks.
arXiv Detail & Related papers (2023-09-03T00:21:37Z) - FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame-averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
arXiv Detail & Related papers (2023-04-28T21:48:31Z) - Neural Implicit Manifold Learning for Topology-Aware Density Estimation [15.878635603835063]
Current generative models learn $\mathcal{M}$ by mapping an $m$-dimensional latent variable through a neural network.
We show that our model can learn manifold-supported distributions with complex topologies more accurately than pushforward models.
arXiv Detail & Related papers (2022-06-22T18:00:00Z) - Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes [15.76663241036412]
We prove for a large class of activation functions that, if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded.
Experiments reveal, for the first time, a multiple-descent phenomenon in the robustness of the min-norm interpolator.
arXiv Detail & Related papers (2021-06-04T17:52:50Z) - Linear Dilation-Erosion Perceptron Trained Using a Convex-Concave Procedure [1.3706331473063877]
We present the linear dilation-erosion perceptron ($\ell$-DEP), which is given by applying linear transformations before computing a dilation and an erosion.
We compare the performance of the $\ell$-DEP model with other machine learning techniques using several classification problems.
arXiv Detail & Related papers (2020-11-11T18:37:07Z) - Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.