A Complement to Neural Networks for Anisotropic Inelasticity at Finite Strains
- URL: http://arxiv.org/abs/2510.04187v1
- Date: Sun, 05 Oct 2025 13:01:05 GMT
- Title: A Complement to Neural Networks for Anisotropic Inelasticity at Finite Strains
- Authors: Hagen Holthusen, Ellen Kuhl
- Abstract summary: We propose a complement to constitutive modeling that augments neural networks with material principles to capture anisotropy and inelasticity at finite strains. The key element is a dual potential that governs dissipation, consistently incorporates anisotropy, and, unlike conventional convex formulations, satisfies the dissipation inequality without requiring convexity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a complement to constitutive modeling that augments neural networks with material principles to capture anisotropy and inelasticity at finite strains. The key element is a dual potential that governs dissipation, consistently incorporates anisotropy, and, unlike conventional convex formulations, satisfies the dissipation inequality without requiring convexity. Our neural network architecture employs invariant-based input representations in terms of mixed elastic, inelastic and structural tensors. It adapts Input Convex Neural Networks and introduces Input Monotonic Neural Networks to broaden the admissible potential class. To bypass exponential-map time integration in the finite strain regime and to stabilize the training of inelastic materials, we employ recurrent Liquid Neural Networks. The approach is evaluated at both material point and structural scales. We benchmark against recurrent models without physical constraints and validate predictions of deformation and reaction forces for unseen boundary value problems. In all cases, the method delivers accurate and stable performance beyond the training regime. The neural network and finite element implementations are available as open-source and are accessible to the public via https://doi.org/10.5281/zenodo.17199965.
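As context for the architecture choices named above, below is a minimal sketch of an Input Convex Neural Network in the style of Amos et al. (2017). It is illustrative only, not the authors' released implementation; the class name and layer sizes are assumptions. Convexity of the scalar potential in its inputs follows from nonnegative weights on the hidden path and a convex, nondecreasing activation.

```python
# A minimal Input Convex Neural Network (ICNN) sketch; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar potential psi(x) that is convex in the input x."""

    def __init__(self, dim_in: int, dim_hidden: int, n_layers: int):
        super().__init__()
        # Passthrough weights acting directly on x are unconstrained.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(n_layers)]
            + [nn.Linear(dim_in, 1)]
        )
        # Weights on the hidden state must be nonnegative to preserve convexity.
        self.Wz = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden, bias=False) for _ in range(n_layers - 1)]
            + [nn.Linear(dim_hidden, 1, bias=False)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softplus is convex and nondecreasing, as the ICNN construction requires.
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:-1], self.Wz[:-1]):
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0.0)))
        return self.Wx[-1](x) + F.linear(z, self.Wz[-1].weight.clamp(min=0.0))
```

The paper's Input Monotonic Neural Networks reportedly relax this construction to broaden the admissible potential class; that variant is not sketched here.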
Related papers
- Dense Neural Networks are not Universal Approximators [53.27010448621372]
We show that dense neural networks cannot universally approximate arbitrary continuous functions. We consider ReLU neural networks subject to natural constraints on weights and input and output dimensions.
arXiv Detail & Related papers (2026-02-07T16:52:38Z) - Diffusion-Guided Renormalization of Neural Systems via Tensor Networks [0.0]
Far from equilibrium, neural systems self-organize across multiple scales. Exploiting multiscale self-organization in neuroscience and artificial intelligence requires a computational framework. I develop a scalable graph inference algorithm for discovering community structure from subsampled neural activity.
arXiv Detail & Related papers (2025-10-07T18:26:10Z) - Why Neural Network Can Discover Symbolic Structures with Gradient-based Training: An Algebraic and Geometric Foundation for Neurosymbolic Reasoning [73.18052192964349]
We develop a theoretical framework that explains how discrete symbolic structures can emerge naturally from continuous neural network training dynamics. By lifting neural parameters to a measure space and modeling training as Wasserstein gradient flow, we show that under geometric constraints, the parameter measure $\mu_t$ undergoes two concurrent phenomena.
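For readers unfamiliar with the notation in that summary, a Wasserstein gradient flow of an energy functional $F[\mu]$ over measures takes the standard form of a continuity equation; this is the textbook form, not an equation quoted from the paper.

```latex
% Standard Wasserstein-2 gradient flow of an energy functional F[mu]:
% the parameter measure mu_t descends F via a continuity equation.
\partial_t \mu_t = \nabla \cdot \Big( \mu_t \, \nabla \frac{\delta F}{\delta \mu}[\mu_t] \Big)
```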
arXiv Detail & Related papers (2025-06-26T22:40:30Z) - Can KAN CANs? Input-convex Kolmogorov-Arnold Networks (KANs) as hyperelastic constitutive artificial neural networks (CANs) [0.0]
We present a new type of trainable model for learning the laws of hyperelastic material behavior. The resulting models are both compact and interpretable, admitting analytical constitutive relationships. We show that ICKANs accurately capture stress-strain behavior across diverse strain states.
arXiv Detail & Related papers (2025-03-07T17:42:24Z) - Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
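For reference, the binary Gibbs-Duhem relation for activity coefficients $\gamma_i$ at constant temperature and pressure is shown below, together with an illustrative squared-residual penalty of the kind such a loss term could take; the exact loss used in the paper may differ.

```latex
% Gibbs-Duhem constraint for a binary mixture at constant T and p
% (with x_2 = 1 - x_1):
x_1 \frac{\partial \ln \gamma_1}{\partial x_1}
  + x_2 \frac{\partial \ln \gamma_2}{\partial x_1} = 0
% An illustrative squared-residual penalty added to the training loss:
\mathcal{L}_{\mathrm{GD}} =
  \left( x_1 \frac{\partial \ln \hat{\gamma}_1}{\partial x_1}
       + x_2 \frac{\partial \ln \hat{\gamma}_2}{\partial x_1} \right)^{2}
```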
arXiv Detail & Related papers (2023-05-31T07:36:45Z) - Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks [47.73646927060476]
We analyze the dynamics of finite width effects in wide but finite feature learning neural networks.
Our results are non-perturbative in the strength of feature learning.
arXiv Detail & Related papers (2023-04-06T23:11:49Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs, owing to their remarkable potential to handle time-dependent and event-driven data. Yet there has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations. This work delves into the intrinsic structures of SNNs, elucidating their influence on expressivity.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - NN-EUCLID: deep-learning hyperelasticity without stress data [0.0]
We propose a new approach for unsupervised learning of hyperelastic laws with physics-consistent deep neural networks.
In contrast to supervised learning, which assumes the availability of stress-strain data, the approach only uses realistically measurable full-field displacement and global reaction force data.
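A plausible sketch of such an unsupervised, physics-consistent objective is given below; the notation and weighting $\lambda$ are assumptions, and the paper's exact functional may differ. The stresses $\mathbf{P}_\theta$ predicted from measured displacements must satisfy the weak form of equilibrium at free nodes and reproduce the measured reaction forces at constrained ones.

```latex
% Illustrative unsupervised objective (notation assumed, not the paper's):
% equilibrium residuals of the predicted stresses P_theta at free nodes a,
% plus mismatch with measured reaction forces R_b at constrained nodes b.
\mathcal{L}(\theta) =
  \sum_{a \,\in\, \mathrm{free}}
    \Big\| \int_{\Omega} \mathbf{P}_\theta \, \nabla N_a \, \mathrm{d}V \Big\|^2
  + \lambda \sum_{b \,\in\, \mathrm{fixed}}
    \Big\| \int_{\Omega} \mathbf{P}_\theta \, \nabla N_b \, \mathrm{d}V - \mathbf{R}_b \Big\|^2
```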
arXiv Detail & Related papers (2022-05-04T13:54:54Z) - Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z) - Physical Constraint Embedded Neural Networks for inference and noise regulation [0.0]
We present methods for embedding even-odd symmetries and conservation laws in neural networks. We demonstrate that the approach can accurately infer symmetries without prior knowledge.
We highlight the noise resilient properties of physical constraint embedded neural networks.
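One common way to embed an even or odd symmetry by construction, sketched below for illustration (not necessarily the formulation used in the paper), is to symmetrize the output of an unconstrained backbone network.

```python
# A minimal sketch of embedding even/odd symmetry by construction;
# illustrative only, not necessarily the paper's formulation.
import torch
import torch.nn as nn

class SymmetrizedNet(nn.Module):
    """Wraps a backbone g so the output is exactly even or odd in x."""

    def __init__(self, backbone: nn.Module, parity: str = "even"):
        super().__init__()
        self.g = backbone
        self.sign = 1.0 if parity == "even" else -1.0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # f(x) = (g(x) + g(-x)) / 2 is even; (g(x) - g(-x)) / 2 is odd.
        return 0.5 * (self.g(x) + self.sign * self.g(-x))

backbone = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
f_even = SymmetrizedNet(backbone, parity="even")
x = torch.randn(4, 1)
assert torch.allclose(f_even(x), f_even(-x), atol=1e-6)  # even by construction
```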
arXiv Detail & Related papers (2021-05-19T14:07:20Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Generalization bound of globally optimal non-convex neural network training: Transportation map estimation by infinite dimensional Langevin dynamics [50.83356836818667]
We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error.
Existing frameworks, such as mean-field theory and neural tangent kernel theory for neural network optimization analysis, typically require taking the infinite-width limit of the network to show global convergence.
arXiv Detail & Related papers (2020-07-11T18:19:50Z) - Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z) - ResiliNet: Failure-Resilient Inference in Distributed Neural Networks [56.255913459850674]
We introduce ResiliNet, a scheme for making inference in distributed neural networks resilient to physical node failures.
Failout simulates physical node failure conditions during training using dropout, and is specifically designed to improve the resiliency of distributed neural networks.
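A rough sketch of that idea follows; the class name and the absence of activation rescaling are illustrative assumptions, not ResiliNet's actual code. Unlike element-wise dropout, the entire output of a partition is zeroed, which is what a physical node failure looks like to downstream partitions.

```python
# Rough "Failout" sketch: drop a whole node's (partition's) output during
# training to simulate physical node failure. Names are assumptions.
import torch
import torch.nn as nn

class Failout(nn.Module):
    """Zeroes the entire activation tensor of a partition with probability p."""

    def __init__(self, p: float = 0.1):
        super().__init__()
        self.p = p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and torch.rand(()) < self.p:
            return torch.zeros_like(x)  # simulate the node being down
        return x
```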
arXiv Detail & Related papers (2020-02-18T05:58:24Z)