Generalized invariants meet constitutive neural networks: A novel framework for hyperelastic materials
- URL: http://arxiv.org/abs/2508.12063v3
- Date: Wed, 17 Sep 2025 22:51:49 GMT
- Title: Generalized invariants meet constitutive neural networks: A novel framework for hyperelastic materials
- Authors: Denisa Martonová, Alain Goriely, Ellen Kuhl
- Abstract summary: We introduce a new data-driven framework that simultaneously discovers appropriate invariants and models for isotropic incompressible materials. Our approach identifies both the most suitable invariants in a class of generalized invariants and the corresponding strain energy function. By looking at a continuous family of possible invariants, the model can flexibly adapt to different material behaviors.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The major challenge in determining a hyperelastic model for a given material is the choice of invariants and the selection of how the strain energy function depends functionally on these invariants. Here we introduce a new data-driven framework that simultaneously discovers appropriate invariants and constitutive models for isotropic incompressible hyperelastic materials. Our approach identifies both the most suitable invariants in a class of generalized invariants and the corresponding strain energy function directly from experimental observations. Unlike previous methods that rely on fixed invariant choices or sequential fitting procedures, our method integrates the discovery process into a single neural network architecture. By looking at a continuous family of possible invariants, the model can flexibly adapt to different material behaviors. We demonstrate the effectiveness of this approach using popular benchmark datasets for rubber and brain tissue. For rubber, the method recovers a stretch-dominated formulation consistent with classical models. For brain tissue, it identifies a formulation sensitive to small stretches, capturing the nonlinear shear response characteristic of soft biological matter. Compared to traditional and neural-network-based models, our framework provides improved predictive accuracy and interpretability across a wide range of deformation states. This unified strategy offers a robust tool for automated and physically meaningful model discovery in hyperelasticity.
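The abstract describes searching over a continuous family of generalized invariants together with a strain energy function. As a minimal illustrative sketch only (this is not the authors' implementation; the linear blend form and the parameter names `alpha` and `mu` are assumptions for illustration), one could parameterize a weighted combination of the classical invariants I1 and I2 for an incompressible uniaxial stretch, with the weight `alpha` treated as a trainable quantity alongside the strain energy coefficients:

```python
def invariants(lam):
    """Classical isotropic invariants I1, I2 for an incompressible
    uniaxial stretch, F = diag(lam, 1/sqrt(lam), 1/sqrt(lam))."""
    I1 = lam**2 + 2.0 / lam
    I2 = 2.0 * lam + 1.0 / lam**2
    return I1, I2

def generalized_invariant(I1, I2, alpha):
    """Hypothetical one-parameter family blending I1 and I2.
    alpha would be discovered from data alongside the energy terms."""
    return alpha * I1 + (1.0 - alpha) * I2

def strain_energy(lam, alpha, mu):
    """Simplest one-term strain energy in the generalized invariant,
    normalized to vanish in the undeformed state (I = 3 at lam = 1)."""
    I1, I2 = invariants(lam)
    I = generalized_invariant(I1, I2, alpha)
    return 0.5 * mu * (I - 3.0)
```

In a neural-network setting, `alpha` and the functional form of the energy would both be optimized against stress-stretch data; the sketch above only shows the structure of the parameterization, not the discovery procedure.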
Related papers
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
We present a unifying perspective on recent results on ridge regression.<n>We use the basic tools of random matrix theory and free probability, aimed at readers with backgrounds in physics and deep learning.<n>Our results extend and provide a unifying perspective on earlier models of scaling laws.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Exploring hyperelastic material model discovery for human brain cortex: multivariate analysis vs. artificial neural network approaches [10.003764827561238]
This study aims to identify the most favorable material model for human brain tissue.
We apply artificial neural network and multiple regression methods to a generalization of widely accepted classic models.
arXiv Detail & Related papers (2023-10-16T18:49:59Z)
- Stress representations for tensor basis neural networks: alternative formulations to Finger-Rivlin-Ericksen [0.0]
We survey a variety of tensor basis neural network models for modeling hyperelastic materials in a finite deformation context.
We compare potential-based and coefficient-based approaches, as well as different calibration techniques.
Nine variants are tested against both noisy and noiseless datasets for three different materials.
arXiv Detail & Related papers (2023-08-21T23:28:26Z)
- Physics-informed UNets for Discovering Hidden Elasticity in Heterogeneous Materials [0.0]
We develop a novel UNet-based neural network model for inversion in elasticity (El-UNet).
We show superior performance, both in terms of accuracy and computational cost, by El-UNet compared to fully-connected physics-informed neural networks.
arXiv Detail & Related papers (2023-06-01T23:35:03Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being orders of magnitude faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z)
- Automatically Polyconvex Strain Energy Functions using Neural Ordinary Differential Equations [0.0]
Deep neural networks are able to learn complex material behavior without the constraints of closed-form approximations.
The N-ODE material model is able to capture synthetic data generated from closed-form material models.
The framework can be used to model a large class of materials.
arXiv Detail & Related papers (2021-10-03T13:11:43Z)
- Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity [2.8402080392117752]
Neuroscience applications collect high-dimensional predictors corresponding to brain activity in different regions along with behavioral outcomes.
Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimates of these models can struggle in practice when there is model misspecification.
We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors.
arXiv Detail & Related papers (2020-04-10T19:31:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information (including all content) and is not responsible for any consequences.