Triangulating PL functions and the existence of efficient ReLU DNNs
- URL: http://arxiv.org/abs/2505.07137v1
- Date: Sun, 11 May 2025 22:20:16 GMT
- Title: Triangulating PL functions and the existence of efficient ReLU DNNs
- Authors: Danny Calegari
- Abstract summary: We show that every piecewise linear function $f:R^d \to R$ with compact support a polyhedron $P$ has a representation as a sum of so-called `simplex functions'.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We show that every piecewise linear function $f:R^d \to R$ with compact support a polyhedron $P$ has a representation as a sum of so-called `simplex functions'. Such representations arise from degree 1 triangulations of the relative homology class (in $R^{d+1}$) bounded by $P$ and the graph of $f$, and give a short elementary proof of the existence of efficient universal ReLU neural networks that simultaneously compute all such functions $f$ of bounded complexity.
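The abstract leaves the construction implicit, so the sketch below is only a plausible illustration (an assumption, not the paper's definition): take a `simplex function' to mean the PL bump obtained as the positive part of the minimum of the affine functions cutting out the facets of a simplex. Using the identity $\min(a,b) = b - \mathrm{ReLU}(b-a)$, such a bump is computed exactly by a small feed-forward ReLU network.

    # A hypothetical "simplex function" in the plane, for illustration only:
    # the positive part of the minimum of the three affine functions that
    # cut out the standard 2-simplex {x >= 0, y >= 0, x + y <= 1}.
    import numpy as np

    def relu(t):
        return np.maximum(t, 0.0)

    def simplex_function(x, y):
        # Direct definition: max(0, min(x, y, 1 - x - y)).
        return relu(np.minimum(np.minimum(x, y), 1.0 - x - y))

    def simplex_function_relu_net(x, y):
        # The same function written only with affine maps and ReLU,
        # using min(a, b) = b - ReLU(b - a).
        m1 = y - relu(y - x)                            # min(x, y)
        m2 = (1.0 - x - y) - relu((1.0 - x - y) - m1)   # min(x, y, 1 - x - y)
        return relu(m2)

    pts = np.random.rand(5, 2)
    for x, y in pts:
        assert np.isclose(simplex_function(x, y), simplex_function_relu_net(x, y))
    print("ReLU-network form agrees with the direct definition on sample points")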
Related papers
- $p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables [55.2480439325792]
A regression model is constructed that allows approximating continuous functions with any degree of accuracy. The proposed model can be considered a simple alternative to possible $p$-adic models based on a neural network architecture.
arXiv Detail & Related papers (2025-03-30T15:42:08Z) - Neural Networks and (Virtual) Extended Formulations [5.762677915745415]
We prove lower bounds on the size of neural networks that optimize over $P$. We show that the extension complexity $\mathrm{xc}(P)$ is a lower bound on the size of any monotone or input-convex neural network that solves the linear optimization problem over $P$.
arXiv Detail & Related papers (2024-11-05T11:12:11Z) - Representing Piecewise-Linear Functions by Functions with Minimal Arity [0.5266869303483376]
We show that the tessellation of the input space $\mathbb{R}^n$ induced by the function $F$ has a direct connection to the number of arguments in the $\max$ functions.
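A toy illustration of that connection (the function and names below are invented for this sketch, not taken from the paper): a convex PL function written as a $\max$ of three affine pieces induces at most three linear regions, and each input point can be labelled by the affine piece attained there.

    # Toy illustration: a convex PL function written as a max of affine pieces.
    # Each input is assigned the index of the affine piece attained at it,
    # which carves the input space into the cells of a tessellation.
    import numpy as np

    # F(x) = max(0, x, 2x - 1): a max of arity 3, hence at most 3 linear regions.
    slopes = np.array([0.0, 1.0, 2.0])
    intercepts = np.array([0.0, 0.0, -1.0])

    def F(x):
        return np.max(slopes * x + intercepts)

    def active_piece(x):
        return int(np.argmax(slopes * x + intercepts))

    for x in [-1.0, 0.5, 2.0]:
        print(f"x = {x:5.2f}  F(x) = {F(x):5.2f}  active affine piece = {active_piece(x)}")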
arXiv Detail & Related papers (2024-06-04T15:39:08Z) - A Minimal Control Family of Dynamical Systems for Universal Approximation [5.217870815854702]
The universal approximation property (UAP) holds a fundamental position in deep learning. We show that dynamical systems driven by a minimal control family can approximate continuous functions on compact domains. Our results reveal an underlying connection between the approximation power of neural networks and control systems.
arXiv Detail & Related papers (2023-12-20T10:36:55Z) - Shallow neural network representation of polynomials [91.3755431537592]
We show that polynomials in $d$ variables of degree $R$ can be represented on $[0,1]^d$ as shallow neural networks of width $d+1+\sum_{r=2}^{R}\binom{r+d-1}{d-1}$.
arXiv Detail & Related papers (2022-08-17T08:14:52Z) - Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
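The paper's model is not reproduced here, but the kind of object it operates on can be sketched with plain linear algebra: boundary matrices of a small simplicial complex and the associated Hodge Laplacian, whose kernel detects homology. The complex below (a hollow triangle with one extra edge) is chosen purely for illustration.

    # Boundary matrix for a small simplicial complex: vertices {0,1,2,3},
    # edges {01, 02, 12, 23}, and no 2-simplices (the triangle 012 is "hollow",
    # so it carries a nontrivial 1-dimensional homology class).
    import numpy as np

    # B1[v, e] = -1 if vertex v is the tail of edge e, +1 if it is the head.
    B1 = np.array([
        [-1, -1,  0,  0],   # vertex 0
        [ 1,  0, -1,  0],   # vertex 1
        [ 0,  1,  1, -1],   # vertex 2
        [ 0,  0,  0,  1],   # vertex 3
    ])
    # No 2-simplices, so the edge-to-triangle boundary B2 is empty.
    B2 = np.zeros((4, 0))

    # Hodge 1-Laplacian: L1 = B1^T B1 + B2 B2^T.
    L1 = B1.T @ B1 + B2 @ B2.T

    eigvals = np.linalg.eigvalsh(L1)
    n_harmonic = int(np.sum(np.isclose(eigvals, 0.0)))
    print("dimension of harmonic 1-forms (first Betti number):", n_harmonic)  # expect 1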
arXiv Detail & Related papers (2021-10-28T14:59:41Z) - On minimal representations of shallow ReLU networks [0.0]
We show that the minimal representation for $f$ uses either $n$, $n+1$, or $n+2$ neurons.
In particular, when the input is one-dimensional, minimal representations always use at most $n+1$ neurons, whereas in every higher-dimensional setting there are functions for which $n+2$ neurons are needed.
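A concrete one-dimensional illustration of this kind of counting (here $n$ is taken to be the number of breakpoints, which is an assumption for this sketch and need not match the paper's convention): a CPWL function with $n$ kinks can be written as an affine part plus $n$ ReLU units.

    # A 1D continuous piecewise-linear function with n = 2 breakpoints,
    # written as an affine function plus n ReLU neurons (a shallow network).
    import numpy as np

    def relu(t):
        return np.maximum(t, 0.0)

    # Target: f(x) = |x| + |x - 1|, which has kinks at x = 0 and x = 1.
    def f(x):
        return np.abs(x) + np.abs(x - 1)

    # Shallow representation: f(x) = (1 - 2x) + 2*ReLU(x) + 2*ReLU(x - 1).
    # The slope increases by +2 at each kink, so each kink costs one ReLU neuron.
    def f_shallow(x):
        return (1 - 2 * x) + 2 * relu(x) + 2 * relu(x - 1)

    xs = np.linspace(-2, 3, 11)
    assert np.allclose(f(xs), f_shallow(xs))
    print("2-neuron shallow ReLU network reproduces the 2-kink target exactly")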
arXiv Detail & Related papers (2021-08-12T10:22:24Z) - Quantitative approximation results for complex-valued neural networks [0.0]
We show that complex-valued neural networks with the modReLU activation function $\sigma(z) = \mathrm{ReLU}(|z| + b)\,\frac{z}{|z|}$ can uniformly approximate complex-valued functions of regularity $C^n$ on compact subsets of $\mathbb{C}^d$, giving explicit bounds on the approximation rate.
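For reference, a small sketch of the modReLU nonlinearity as it is commonly defined, with a bias parameter $b$; the precise convention used in the paper may differ, so treat the choice of $b$ here as an assumption.

    # modReLU as commonly defined: shift the modulus by a bias b, pass it
    # through ReLU, and keep the original phase (0 maps to 0).
    import numpy as np

    def modrelu(z, b=-0.5):
        # modReLU(z) = ReLU(|z| + b) * z / |z|
        modulus = np.abs(z)
        scaled = np.maximum(modulus + b, 0.0)
        phase = np.divide(z, modulus, out=np.zeros_like(z), where=modulus > 0)
        return scaled * phase

    z = np.array([1.0 + 1.0j, 0.2 + 0.1j, -0.3j, 0.0 + 0.0j])
    print(modrelu(z))   # small-modulus inputs are set to zero; phases are preserved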
arXiv Detail & Related papers (2021-02-25T18:57:58Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to be approximated and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the problem.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - On the Modularity of Hypernetworks [103.1147622394852]
We show that for a structured target function, the overall number of trainable parameters in a hypernetwork is smaller by orders of magnitude than the number of trainable parameters of a standard neural network and an embedding method.
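A minimal sketch of the general idea, not the paper's construction (all sizes and names below are made up): a hypernetwork maps a task embedding to the weights of a target network, so the trainable parameters are those of the hypernetwork rather than of a separate full-size network per task.

    # Minimal hypernetwork sketch in numpy: a linear hypernetwork maps a
    # low-dimensional task embedding to the flattened weights of a tiny
    # one-hidden-layer target network.
    import numpy as np

    rng = np.random.default_rng(0)

    emb_dim, in_dim, hidden, out_dim = 4, 8, 16, 1
    n_target_params = in_dim * hidden + hidden + hidden * out_dim + out_dim

    # Trainable parameters: only those of the hypernetwork.
    H = rng.normal(scale=0.1, size=(n_target_params, emb_dim))

    def target_forward(x, task_embedding):
        # Generate the target network's weights from the task embedding...
        params = H @ task_embedding
        i = 0
        W1 = params[i:i + in_dim * hidden].reshape(in_dim, hidden); i += in_dim * hidden
        b1 = params[i:i + hidden]; i += hidden
        W2 = params[i:i + hidden * out_dim].reshape(hidden, out_dim); i += hidden * out_dim
        b2 = params[i:i + out_dim]
        # ...then run an ordinary forward pass with them.
        h = np.maximum(x @ W1 + b1, 0.0)
        return h @ W2 + b2

    x = rng.normal(size=(5, in_dim))
    task = rng.normal(size=emb_dim)
    print(target_forward(x, task).shape)   # (5, 1)
    print("trainable parameters live in the hypernetwork H:", H.size)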
arXiv Detail & Related papers (2020-02-23T22:51:52Z)