Numerical exploration of the range of shape functionals using neural networks
- URL: http://arxiv.org/abs/2602.14881v1
- Date: Mon, 16 Feb 2026 16:10:58 GMT
- Title: Numerical exploration of the range of shape functionals using neural networks
- Authors: Eloi Martinet, Ilias Ftouhi
- Abstract summary: We introduce a novel numerical framework for the exploration of Blaschke--Santaló diagrams. We introduce a parametrization of convex bodies in arbitrary dimensions using a specific invertible neural network architecture based on gauge functions. To achieve a uniform sampling inside the diagram, and thus a satisfying description, we introduce an interacting particle system that minimizes a Riesz energy functional via automatic differentiation in PyTorch. The effectiveness of the method is demonstrated on several diagrams involving both geometric and PDE-type functionals for convex bodies of $\mathbb{R}^2$ and $\mathbb{R}^3$.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel numerical framework for the exploration of Blaschke--Santaló diagrams, which are efficient tools characterizing the possible inequalities relating some given shape functionals. We introduce a parametrization of convex bodies in arbitrary dimensions using a specific invertible neural network architecture based on gauge functions, allowing an intrinsic conservation of the convexity of the sets during the shape optimization process. To achieve a uniform sampling inside the diagram, and thus a satisfying description of it, we introduce an interacting particle system that minimizes a Riesz energy functional via automatic differentiation in PyTorch. The effectiveness of the method is demonstrated on several diagrams involving both geometric and PDE-type functionals for convex bodies of $\mathbb{R}^2$ and $\mathbb{R}^3$, namely, the volume, the perimeter, the moment of inertia, the torsional rigidity, the Willmore energy, and the first two Neumann eigenvalues of the Laplacian.
Related papers
- Multi-patch isogeometric neural solver for partial differential equations on computer-aided design domains [0.0]
This work develops a computational framework that combines physics-informed neural networks with multi-patch isogeometric analysis. The method utilizes patch-local neural networks that operate on the reference domain of isogeometric analysis. The effectiveness of the suggested method is demonstrated on two non-trivial and practically relevant use-cases.
arXiv Detail & Related papers (2025-09-29T19:57:54Z) - Self-Supervised Coarsening of Unstructured Grid with Automatic Differentiation [55.88862563823878]
In this work, we present an original algorithm to coarsen an unstructured grid based on the concepts of differentiable physics. We demonstrate performance of the algorithm on two PDEs: a linear equation which governs slightly compressible fluid flow in porous media and the wave equation. Our results show that in the considered scenarios, we reduced the number of grid points up to 10 times while preserving the modeled variable dynamics in the points of interest.
arXiv Detail & Related papers (2025-07-24T11:02:13Z) - Functional Neural Wavefunction Optimization [11.55213641895401]
We propose a framework for the design and analysis of optimization algorithms in variational quantum Monte Carlo. The framework translates infinite-dimensional optimization dynamics into tractable parameter-space algorithms. We validate our framework with numerical experiments demonstrating its practical relevance.
arXiv Detail & Related papers (2025-07-14T22:07:38Z) - DimINO: Dimension-Informed Neural Operator Learning [41.37905663176428]
DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to 76.3% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Shape-informed surrogate models based on signed distance function domain encoding [8.052704959617207]
We propose a non-intrusive method to build surrogate models that approximate the solution of parameterized partial differential equations (PDEs).
Our approach is based on the combination of two neural networks (NNs).
arXiv Detail & Related papers (2024-09-19T01:47:04Z) - Generating function for projected entangled-pair states [0.1759252234439348]
We extend the generating function approach for tensor network diagrammatic summation.
Taking the form of a one-particle excitation, we show that the excited state can be computed efficiently in the generating function formalism.
We conclude with a discussion on generalizations to multi-particle excitations.
arXiv Detail & Related papers (2023-07-16T15:49:37Z) - Shape And Structure Preserving Differential Privacy [70.08490462870144]
We show how the gradient of the squared distance function offers better control over sensitivity than the Laplace mechanism.
arXiv Detail & Related papers (2022-09-21T18:14:38Z) - Counting Phases and Faces Using Bayesian Thermodynamic Integration [77.34726150561087]
We introduce a new approach to reconstruction of the thermodynamic functions and phase boundaries in two-parametric statistical mechanics systems.
We use the proposed approach to accurately reconstruct the partition functions and phase diagrams of the Ising model and the exactly solvable non-equilibrium TASEP.
arXiv Detail & Related papers (2022-05-18T17:11:23Z) - Representation Theorem for Matrix Product States [1.7894377200944511]
We investigate the universal representation capacity of the Matrix Product States (MPS) from the perspective of functions and continuous functions.
We show that MPS can accurately realize arbitrary functions by providing a construction method of the corresponding MPS structure for an arbitrarily given gate.
We study the relation between MPS and neural networks and show that the MPS with a scale-invariant sigmoidal function is equivalent to a one-hidden-layer neural network.
arXiv Detail & Related papers (2021-03-15T11:06:54Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - Space of Functions Computed by Deep-Layered Machines [74.13735716675987]
We study the space of functions computed by random-layered machines, including deep neural networks and Boolean circuits.
Investigating the distribution of Boolean functions computed on the recurrent and layer-dependent architectures, we find that it is the same in both models.
arXiv Detail & Related papers (2020-04-19T18:31:03Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.