Interactions that reshape the interfaces of the interacting parties
- URL: http://arxiv.org/abs/2602.17917v1
- Date: Fri, 20 Feb 2026 00:25:26 GMT
- Title: Interactions that reshape the interfaces of the interacting parties
- Authors: David I. Spivak
- Abstract summary: Polynomial functors model systems with interfaces: each specifies the outputs a system can produce and, for each output, the inputs it accepts. We construct a monoidal closed category $\mathbf{PolyTr}$ of polynomial trees, with coinductively-defined morphisms, tensor product, and internal hom. We illustrate a notion of progressive generative adversarial networks, where gradient feedback determines when the image-generation interface grows to a higher resolution.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Polynomial functors model systems with interfaces: each polynomial specifies the outputs a system can produce and, for each output, the inputs it accepts. The bicategory $\mathbb{O}\mathbf{rg}$ of dynamic organizations \cite{spivak2021learners} gives a notion of state-driven interaction patterns that evolves over time, but each system's interface remains fixed throughout the interaction. Yet in many systems, the outputs sent and inputs received can reshape the interface itself: a cell differentiating in response to chemical signals gains or loses receptors; a sensor damaged by its input loses a channel; a neural network may grow its output resolution during training. Here we introduce *polynomial trees*, elements of the terminal $(u\triangleleft u)$-coalgebra where $u$ is the polynomial associated to a universe of sets, to model such systems: a polynomial tree is a coinductive tree whose nodes carry polynomials, and in which each round of interaction -- an output chosen and an input received -- determines a child tree, hence the next interface. We construct a monoidal closed category $\mathbf{PolyTr}$ of polynomial trees, with coinductively-defined morphisms, tensor product, and internal hom. We then build a bicategory $\mathbb{O}\mathbf{rgTr}$ generalizing $\mathbb{O}\mathbf{rg}$, whose hom-categories parametrize morphisms by state sets with coinductive action-and-update data. We provide a locally fully faithful functor $\mathbb{O}\mathbf{rg}\to\mathbb{O}\mathbf{rgTr}$ via constant trees, those for which the interfaces do not change through time. We illustrate the generalization by suggesting a notion of progressive generative adversarial networks, where gradient feedback determines when the image-generation interface grows to a higher resolution.
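To make the coinductive picture concrete, here is a minimal Haskell sketch (not from the paper; the names `Poly`, `PolyTree`, and `constTree` are illustrative, and sets of positions and directions are approximated by Haskell types): a node carries an interface, and each output-and-input round selects the child tree, i.e. the next interface.

```haskell
-- Illustrative sketch only: a "polynomial" as a choice of output positions
-- together with, for each output, the inputs accepted after that output.
-- Sets are approximated by lists here, which is a simplification.
data Poly o i = Poly
  { outputs   :: [o]        -- positions: outputs the system can produce
  , inputsFor :: o -> [i]   -- directions: inputs accepted for each output
  }

-- A polynomial tree: a coinductive tree whose nodes carry polynomials.
-- One round of interaction (an output chosen, an input received) determines
-- the child tree, hence the next interface. Haskell's laziness supplies
-- the coinduction.
data PolyTree o i = Node
  { interface :: Poly o i
  , next      :: o -> i -> PolyTree o i
  }

-- A constant tree: the interface never changes through time; this is the
-- shape of the trees in the image of the functor Org -> OrgTr.
constTree :: Poly o i -> PolyTree o i
constTree p = Node p (\_ _ -> constTree p)
```

In this sketch, a growing interface such as the progressive-GAN example would be a `next` function that, once the received feedback crosses some threshold, returns a node whose `Poly` offers a higher-resolution space of outputs.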
Related papers
- $p$-Adic Polynomial Regression as Alternative to Neural Network for Approximating $p$-Adic Functions of Many Variables [55.2480439325792]
A regression model is constructed that allows approximating continuous functions with any degree of accuracy. The proposed model can be considered as a simple alternative to possible $p$-adic models based on neural network architecture.
arXiv Detail & Related papers (2025-03-30T15:42:08Z)
- A Quotient Homology Theory of Representation in Neural Networks [0.0]
We prove that if the intersections between each polyhedron and an input manifold are convex, the homology groups of neural representations are isomorphic to quotient homology groups $H_k(\Phi(\mathcal{M})) \simeq H_k(\mathcal{M}/\mathcal{O}_\Phi)$. We develop methods to compute the overlap decomposition through linear programming and a union-find algorithm.
arXiv Detail & Related papers (2025-02-03T13:52:17Z)
- Polynomial Threshold Functions of Bounded Tree-Width: Some Explainability and Complexity Aspects [0.6554326244334868]
The tree-width of a multivariate polynomial is the tree-width of the hypergraph with hyperedges corresponding to its terms. A representation of a Boolean function as the sign of a bounded tree-width polynomial is called a polynomial threshold representation.
arXiv Detail & Related papers (2025-01-14T18:28:08Z)
- Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$. We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with a complexity that is not governed by information exponents.
arXiv Detail & Related papers (2024-06-03T17:56:58Z)
- Learning Hierarchical Polynomials with Three-Layer Neural Networks [56.71223169861528]
We study the problem of learning hierarchical functions over the standard Gaussian distribution with three-layer neural networks.
For a large subclass of degree $k$ polynomials $p$, a three-layer neural network trained via layerwise gradient descent on the square loss learns the target $h$ up to vanishing test error.
This work demonstrates the ability of three-layer neural networks to learn complex features and as a result, learn a broad class of hierarchical functions.
arXiv Detail & Related papers (2023-11-23T02:19:32Z)
- Universal Representation of Permutation-Invariant Functions on Vectors and Tensors [11.345796608258434]
A main object of our study is multiset functions -- that is, permutation-invariant functions over inputs of varying sizes.
Deep Sets, proposed by \cite{zaheer2017deep}, provides a universal representation for continuous multiset functions on scalars via a sum-decomposable model.
We prove that universal representation is guaranteed for continuous and discontinuous multiset functions through a latent space dimension of $O(ND)$.
arXiv Detail & Related papers (2023-10-20T22:00:59Z)
- Shallow neural network representation of polynomials [91.3755431537592]
We show that $d$-variate polynomials of degree $R$ can be represented on $[0,1]^d$ as shallow neural networks of width $d+1+\sum_{r=2}^{R}\binom{r+d-1}{d-1}$.
arXiv Detail & Related papers (2022-08-17T08:14:52Z)
- Training Fully Connected Neural Networks is $\exists\mathbb{R}$-Complete [4.170994234169557]
We consider the problem of finding weights and biases for a two-layer fully connected neural network to fit a given set of data points as well as possible, also known as Empirical Risk Minimization.
We prove that algebraic numbers of arbitrarily large degree are required as weights to be able to train some instances to optimality, even if all data points are rational.
A consequence of this is that a search algorithm like the one by Basu, Mianjy and Mukherjee [ICLR 2018] is impossible for networks with more than one output dimension, unless $\mathsf{NP}=\exists\mathbb{R}$.
arXiv Detail & Related papers (2022-04-04T10:28:11Z)
- Learners' Languages [0.0]
In "Backprop as functor", the authors show that the fundamental elements of deep learning -- descent and backpropagation -- can be conceptualized as a strong monoidal functor Para(Euc)$to$Learn.<n>In this note, we observe that Slens is a full subcategory of Poly, the category of functors in one variable, via the functor $Amapsto AyA$.
arXiv Detail & Related papers (2021-03-01T18:34:00Z)
- Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network can provably learn the ground truth with gradient descent, achieving a loss beyond what the Neural Tangent Kernel regime attains with polynomially many samples.
arXiv Detail & Related papers (2020-07-09T07:09:28Z)
- Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$Nets are a new class of function approximators based on polynomial expansions.
$\Pi$Nets produce state-of-the-art results in three challenging tasks, i.e. image generation, face verification and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.