Stress representations for tensor basis neural networks: alternative
formulations to Finger-Rivlin-Ericksen
- URL: http://arxiv.org/abs/2308.11080v1
- Date: Mon, 21 Aug 2023 23:28:26 GMT
- Title: Stress representations for tensor basis neural networks: alternative
formulations to Finger-Rivlin-Ericksen
- Authors: Jan N. Fuhg, Nikolaos Bouklas, Reese E. Jones
- Abstract summary: We survey a variety of tensor basis neural network models for modeling hyperelastic materials in a finite deformation context.
We compare potential-based and coefficient-based approaches, as well as different calibration techniques.
Nine variants are tested against both noisy and noiseless datasets for three different materials.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data-driven constitutive modeling frameworks based on neural networks and
classical representation theorems have recently gained considerable attention
due to their ability to easily incorporate constitutive constraints and their
excellent generalization performance. In these models, the stress prediction
follows from a linear combination of invariant-dependent coefficient functions
and known tensor basis generators. However, thus far the formulations have been
limited to stress representations based on the classical Rivlin and Ericksen
form, while the performance of alternative representations has yet to be
investigated. In this work, we survey a variety of tensor basis neural network
models for modeling hyperelastic materials in a finite deformation context,
including a number of so far unexplored formulations that use invariants and
generators theoretically equivalent to Finger-Rivlin-Ericksen. Furthermore, we
compare potential-based and coefficient-based approaches, as well as different
calibration techniques. Nine variants are tested against both noisy and
noiseless datasets for three different materials. Theoretical and practical
insights into the performance of each formulation are given.
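To make the coefficient-based formulation concrete, a minimal tensor basis neural network in the classical Finger-Rivlin-Ericksen form, stress = beta0 * I + beta1 * B + beta2 * B^2 with B = F F^T the Finger tensor and the beta_i predicted from the invariants of B, could be sketched as follows. The architecture, layer sizes, and all names are illustrative assumptions, not the paper's implementation.
```python
# Hedged sketch of a coefficient-based tensor basis neural network (TBNN)
# in PyTorch, assuming the Finger-Rivlin-Ericksen representation
#   stress = beta0 * I + beta1 * B + beta2 * B @ B,  with B = F @ F^T,
# and the coefficients beta_i predicted from the isotropic invariants of B.
import torch
import torch.nn as nn


class CoefficientTBNN(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        # MLP mapping the three invariants (I1, I2, I3) to three coefficients
        self.coeffs = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, F: torch.Tensor) -> torch.Tensor:
        # F: (batch, 3, 3) deformation gradients
        B = F @ F.transpose(-1, -2)                # left Cauchy-Green (Finger) tensor
        B2 = B @ B
        eye = torch.eye(3, device=F.device).expand_as(B)
        I1 = B.diagonal(dim1=-2, dim2=-1).sum(-1)                   # tr(B)
        I2 = 0.5 * (I1**2 - B2.diagonal(dim1=-2, dim2=-1).sum(-1))  # (tr(B)^2 - tr(B^2))/2
        I3 = torch.linalg.det(B)                                    # det(B)
        beta = self.coeffs(torch.stack([I1, I2, I3], dim=-1))       # (batch, 3)
        # Stress as a linear combination of the tensor basis generators {I, B, B^2}
        return (beta[..., 0, None, None] * eye
                + beta[..., 1, None, None] * B
                + beta[..., 2, None, None] * B2)
```
A potential-based variant of the same sketch would instead output a scalar strain-energy value from the invariants and recover the stress by automatic differentiation; this is the kind of coefficient-versus-potential choice the paper compares.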
Related papers
- Generative Modeling of Neural Dynamics via Latent Stochastic Differential Equations [1.5467259918426441]
We propose a framework for developing computational models of biological neural systems.
We employ a system of coupled differential equations with differentiable drift and diffusion functions.
We show that these hybrid models achieve competitive performance in predicting stimulus-evoked neural and behavioral responses.
arXiv Detail & Related papers (2024-12-01T09:36:03Z)
- Dynamic Post-Hoc Neural Ensemblers [55.15643209328513]
In this study, we explore employing neural networks as ensemble methods.
Motivated by the risk of learning low-diversity ensembles, we propose regularizing the model by randomly dropping base model predictions.
We demonstrate that this approach lower-bounds the diversity within the ensemble, reducing overfitting and improving generalization capabilities.
arXiv Detail & Related papers (2024-10-06T15:25:39Z)
- Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z)
- Energy-conserving equivariant GNN for elasticity of lattice architected metamaterials [3.7852720324045444]
We generate a large dataset of structure-property relationships for strut-based lattices.
The dataset is made available to the community, which can fuel the development of methods anchored in physical principles.
We present a higher-order GNN model trained on this dataset.
arXiv Detail & Related papers (2024-01-30T11:25:49Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple-hypothesis predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multiple-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newtonian mechanics systems with both fully and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- A deep learning driven pseudospectral PCE based FFT homogenization algorithm for complex microstructures [68.8204255655161]
It is shown that the proposed method is able to predict central moments of interest while being orders of magnitude faster to evaluate than traditional approaches.
arXiv Detail & Related papers (2021-10-26T07:02:14Z)
- Instance-Based Neural Dependency Parsing [56.63500180843504]
We develop neural models that possess an interpretable inference process for dependency parsing.
Our models adopt instance-based inference, where dependency edges are extracted and labeled by comparing them to edges in a training set.
arXiv Detail & Related papers (2021-09-28T05:30:52Z)
- Approximate Latent Force Model Inference [1.3927943269211591]
Latent force models offer an interpretable alternative to purely data-driven tools for inference in dynamical systems.
We show that a neural operator approach can scale our model to thousands of instances, enabling fast, distributed computation.
arXiv Detail & Related papers (2021-09-24T09:55:00Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks with the aim of obtaining a powerful, compact representation.
An algorithm based on alternating least squares is proposed for approximating the weights in TT-format with a reduction in computational cost.
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
- Supervised Autoencoders Learn Robust Joint Factor Models of Neural Activity [2.8402080392117752]
Neuroscience applications collect high-dimensional predictors corresponding to brain activity in different regions, along with behavioral outcomes.
Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimates of these models can struggle in practice when there is model misspecification.
We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors.
arXiv Detail & Related papers (2020-04-10T19:31:57Z)
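As a rough illustration of the supervised autoencoder strategy in the last entry above, the sketch below defines the latent factors as a deterministic function of the high-dimensional predictors and trains them jointly for reconstruction and outcome prediction. The layer choices, loss weighting, and names are assumptions, not the authors' model.
```python
# Minimal supervised autoencoder sketch: latent factors z = f(x) are a
# deterministic function of the predictors x (no distribution is placed on
# them), trained to reconstruct x and predict the behavioral outcome y.
# All details below are illustrative assumptions.
import torch
import torch.nn as nn


class SupervisedAutoencoder(nn.Module):
    def __init__(self, n_features: int, n_factors: int, n_outcomes: int):
        super().__init__()
        self.encoder = nn.Linear(n_features, n_factors)       # z = f(x)
        self.decoder = nn.Linear(n_factors, n_features)       # reconstruct x
        self.outcome_head = nn.Linear(n_factors, n_outcomes)  # predict y

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return self.decoder(z), self.outcome_head(z)


def joint_loss(x, y, x_hat, y_hat, alpha: float = 1.0):
    # Joint factor objective: reconstruction plus a supervised outcome term
    return nn.functional.mse_loss(x_hat, x) + alpha * nn.functional.mse_loss(y_hat, y)
```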
This list is automatically generated from the titles and abstracts of the papers on this site.