Input convex neural networks: universal approximation theorem and implementation for isotropic polyconvex hyperelastic energies
- URL: http://arxiv.org/abs/2502.08534v1
- Date: Wed, 12 Feb 2025 16:15:03 GMT
- Title: Input convex neural networks: universal approximation theorem and implementation for isotropic polyconvex hyperelastic energies
- Authors: Gian-Luca Geuken, Patrick Kurzeja, David Wiedemann, Jörn Mosler
- Abstract summary: This paper presents a novel framework that enforces necessary physical and mathematical constraints while simultaneously satisfying the universal approximation theorem. A universal approximation theorem for the proposed approach is proven: the network can approximate any frame-indifferent, isotropic polyconvex energy (provided the network is large enough). Comparative studies with existing approaches identify the advantages of the proposed method.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel framework of neural networks for isotropic hyperelasticity that enforces necessary physical and mathematical constraints while simultaneously satisfying the universal approximation theorem. The two key ingredients are an input convex network architecture and a formulation in the elementary polynomials of the signed singular values of the deformation gradient. In line with previously published networks, it can rigorously capture frame-indifference and polyconvexity - as well as further constraints like balance of angular momentum and growth conditions. However, and in contrast to previous networks, a universal approximation theorem for the proposed approach is proven. To be more explicit, the proposed network can approximate any frame-indifferent, isotropic polyconvex energy (provided the network is large enough). This is possible by working with a necessary and sufficient criterion for frame-indifferent, isotropic polyconvex functions. Comparative studies with existing approaches identify the advantages of the proposed method, particularly in approximating non-polyconvex energies as well as computing polyconvex hulls.
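The convexity-enforcing ingredient can be sketched compactly. In an input convex neural network (ICNN), hidden-to-hidden weights are kept nonnegative and activations are convex and nondecreasing, so the scalar output is convex in the input. Below is a minimal PyTorch sketch of this mechanism, assuming the standard ICNN layout of Amos et al. (2017); layer sizes and names are illustrative, and the paper's full construction additionally evaluates such a network at the elementary polynomials of the signed singular values of the deformation gradient, which the sketch does not reproduce.

```python
# Minimal ICNN sketch (assumed Amos et al. 2017 layout, not the paper's exact net).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Scalar output convex in x: nonnegative hidden-to-hidden weights,
    convex nondecreasing activations (softplus)."""
    def __init__(self, in_dim, hidden=64, depth=3):
        super().__init__()
        self.Wx = nn.ModuleList([nn.Linear(in_dim, hidden) for _ in range(depth)])
        self.Wz = nn.ModuleList([nn.Linear(hidden, hidden, bias=False)
                                 for _ in range(depth - 1)])
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:], self.Wz):
            # clamping keeps hidden-to-hidden weights nonnegative,
            # which preserves convexity of z in x layer by layer
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0)))
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)

energy = ICNN(in_dim=3)          # e.g. three scalar invariants as inputs
W = energy(torch.rand(8, 3))     # (8, 1), convex in each input row
```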
Related papers
- A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variants of the stochastic proximal point method (SPPM).
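For intuition, the basic stochastic proximal point update solves a small regularized subproblem on a sampled component instead of taking a gradient step. A minimal NumPy sketch, assuming least-squares components f_i(x) = (a_i·x - b_i)²/2 so the prox has a closed form; the paper's setting is broader and covers non-smooth components:

```python
# Hypothetical SPPM instance on least-squares components; illustrative only.
import numpy as np

def sppm_step(x, a_i, b_i, gamma):
    """x_{k+1} = argmin_y f_i(y) + ||y - x||^2 / (2*gamma), closed form here."""
    r = (a_i @ x - b_i) / (1.0 + gamma * (a_i @ a_i))
    return x - gamma * r * a_i

rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
x = np.zeros(5)
for _ in range(1000):
    i = rng.integers(100)                     # sample one component f_i
    x = sppm_step(x, A[i], b[i], gamma=0.5)
print(np.linalg.norm(A @ x - b))              # residual after the run
```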
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used in practice.
In this paper we examine the use of convex recovery models for neural networks.
We show that the stationary points of the non-convex training objective can be characterized as the global optima of a subsampled convex (Lasso) program.
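As a rough illustration of the Lasso connection only: sample candidate ReLU activation patterns, build the corresponding features, and fit a sparse linear model. The data, the random-direction sampling, and scikit-learn's Lasso here are illustrative stand-ins and do not reproduce the paper's exact subsampled program or its guarantees:

```python
# Simplified stand-in for a subsampled convex (Lasso) surrogate of ReLU training.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 10)), rng.normal(size=200)

G = rng.normal(size=(10, 50))            # random directions -> activation patterns
features = np.maximum(X @ G, 0)          # ReLU features, one column per pattern

model = Lasso(alpha=0.1).fit(features, y)   # sparse convex surrogate problem
print(np.count_nonzero(model.coef_), "active neurons selected")
```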
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - A mixed formulation for physics-informed neural networks as a potential
solver for engineering problems in heterogeneous domains: comparison with
finite element method [0.0]
Physics-informed neural networks (PINNs) are capable of finding the solution for a given boundary value problem.
We employ several ideas from the finite element method (FEM) to enhance the performance of existing PINNs in engineering problems.
arXiv Detail & Related papers (2022-06-27T08:18:08Z) - CENN: Conservative energy method based on neural network with subdomains
for solving heterogeneous problems involving complex geometries [6.782934398825898]
We propose a conservative energy method based on a neural network with subdomains (CENN).
The admissible function satisfying the essential boundary condition without boundary penalty is constructed by the radial basis function, particular solution neural network, and general neural network.
We apply the proposed method to some representative examples to demonstrate the ability of the method to model strong discontinuity, singularity, complex boundary, non-linear, and heterogeneous PDE problems.
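The penalty-free treatment of essential boundary conditions can be made concrete: write the field as a term interpolating the boundary data plus a distance-like function, vanishing on the essential boundary, multiplying a free network. A toy 1D PyTorch sketch; the linear u_bc and polynomial distance function are illustrative stand-ins for the paper's radial-basis-function construction:

```python
# Boundary-condition-exact ansatz in the spirit of CENN (toy 1D example).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def u_bc(x):      # interpolates u(0)=0, u(1)=1 (stand-in for the RBF part)
    return x

def dist(x):      # vanishes at the essential-boundary points x=0 and x=1
    return x * (1.0 - x)

def u(x):         # admissible field: satisfies the BC exactly by construction,
    return u_bc(x) + dist(x) * net(x)   # so no boundary penalty term is needed

x = torch.linspace(0, 1, 5).unsqueeze(1)
print(u(x))       # u(0)=0 and u(1)=1 regardless of the network weights
```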
arXiv Detail & Related papers (2021-09-25T09:52:51Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Robust normalizing flows using Bernstein-type polynomials [31.533158456141305]
Normalizing flows (NFs) are a class of generative models that allow exact density evaluation and sampling.
We propose a framework to construct NFs based on increasing triangular maps and Bernstein-type polynomials.
We empirically demonstrate the efficacy of the proposed technique using experiments on both real-world and synthetic datasets.
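The monotonicity mechanism behind such flows is simple: a Bernstein polynomial with nondecreasing coefficients is nondecreasing on [0, 1], yielding an invertible one-dimensional map that can serve as a building block of an increasing triangular map. A minimal NumPy/SciPy sketch; the paper's exact parameterization may differ:

```python
# Monotone 1D Bernstein-polynomial map, the building block of such flows.
import numpy as np
from scipy.special import comb

def bernstein_map(x, theta):
    """Increasing on [0, 1] whenever theta is nondecreasing."""
    n = len(theta) - 1
    k = np.arange(n + 1)
    basis = comb(n, k) * x[:, None]**k * (1 - x[:, None])**(n - k)
    return basis @ theta

rng = np.random.default_rng(0)
theta = np.sort(rng.normal(size=8))     # sorted coefficients => monotone map
x = np.linspace(0, 1, 100)
y = bernstein_map(x, theta)
assert np.all(np.diff(y) >= 0)          # invertible on [0, 1]
```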
arXiv Detail & Related papers (2021-02-06T04:32:05Z) - Universal Approximation Power of Deep Residual Neural Networks via
Nonlinear Control Theory [9.210074587720172]
We explain the universal approximation capabilities of deep residual neural networks through geometric nonlinear control.
Inspired by recent work establishing links between residual networks and control systems, we provide a general sufficient condition for a residual network to have the power of universal approximation.
arXiv Detail & Related papers (2020-07-12T14:53:30Z) - Generalization bound of globally optimal non-convex neural network
training: Transportation map estimation by infinite dimensional Langevin
dynamics [50.83356836818667]
We introduce a new theoretical framework to analyze deep learning optimization with connection to its generalization error.
Existing frameworks such as mean field theory and neural tangent kernel theory for neural network optimization analysis typically require taking the infinite-width limit of the network to show its global convergence.
arXiv Detail & Related papers (2020-07-11T18:19:50Z) - Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional function spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
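Concretely, a kernel-integral layer of this kind updates features by a pointwise linear term plus a learned kernel averaged over all sample points, v'(x) = σ(W v(x) + mean_y κ(x, y) v(y)). A minimal PyTorch sketch with illustrative dimensions; the MLP parameterization of κ is an assumption, not the paper's exact choice:

```python
# One kernel-integral layer of a graph kernel network (sketch).
import torch
import torch.nn as nn

class KernelLayer(nn.Module):
    def __init__(self, width=32, coord_dim=1):
        super().__init__()
        self.W = nn.Linear(width, width)
        self.kappa = nn.Sequential(              # kappa(x, y) -> width x width matrix
            nn.Linear(2 * coord_dim, 64), nn.ReLU(),
            nn.Linear(64, width * width))
        self.width = width

    def forward(self, coords, v):                # coords: (n, d), v: (n, width)
        n = coords.shape[0]
        pairs = torch.cat([coords.repeat_interleave(n, 0),
                           coords.repeat(n, 1)], dim=-1)
        K = self.kappa(pairs).view(n, n, self.width, self.width)
        integral = torch.einsum('xyij,yj->xi', K, v) / n   # Monte Carlo quadrature
        return torch.relu(self.W(v) + integral)

layer = KernelLayer()
coords, v = torch.rand(50, 1), torch.rand(50, 32)
out = layer(coords, v)                           # (50, 32)
```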
arXiv Detail & Related papers (2020-03-07T01:56:20Z) - Neural Proximal/Trust Region Policy Optimization Attains Globally
Optimal Policy [119.12515258771302]
We show that a variant of PPO equipped with over-parametrization converges to the globally optimal policy.
The key to the analysis is the global convergence of infinite-dimensional mirror descent under a notion of one-point monotonicity, where the gradients and iterates are instantiated by neural networks.
arXiv Detail & Related papers (2019-06-25T03:20:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.