Neural Operators Can Discover Functional Clusters
- URL: http://arxiv.org/abs/2602.23528v1
- Date: Thu, 26 Feb 2026 22:20:34 GMT
- Title: Neural Operators Can Discover Functional Clusters
- Authors: Yicen Li, Jose Antonio Lara Benitez, Ruiyang Hong, Anastasis Kratsios, Paul David McNicholas, Maarten Valentijn de Hoop
- Abstract summary: We show that sample-based neural operators can learn any finite collection of classes in an infinite-dimensional reproducing kernel Hilbert space. We develop an NO-powered clustering pipeline for functional data and apply it to unlabeled families of ordinary differential equation (ODE) trajectories.
- Score: 9.0267232149083
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operator learning is reshaping scientific computing by amortizing inference across infinite families of problems. While neural operators (NOs) are increasingly well understood for regression, far less is known for classification and its unsupervised analogue: clustering. We prove that sample-based neural operators can learn any finite collection of classes in an infinite-dimensional reproducing kernel Hilbert space, even when the classes are neither convex nor connected, under mild kernel sampling assumptions. Our universal clustering theorem shows that any $K$ closed classes can be approximated to arbitrary precision by NO-parameterized classes in the upper Kuratowski topology on closed sets, a notion that can be interpreted as disallowing false-positive misclassifications. Building on this, we develop an NO-powered clustering pipeline for functional data and apply it to unlabeled families of ordinary differential equation (ODE) trajectories. Discretized trajectories are lifted by a fixed pre-trained encoder into a continuous feature map and mapped to soft assignments by a lightweight trainable head. Experiments on diverse synthetic ODE benchmarks show that the resulting practical SNO recovers latent dynamical structure in regimes where classical methods fail, providing evidence consistent with our universal clustering theory.
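The described pipeline lends itself to a compact implementation. Below is a minimal PyTorch sketch of the architecture as stated in the abstract (frozen pre-trained encoder lifting discretized trajectories to features, lightweight trainable head producing soft assignments); the class name, layer sizes, and the toy placeholder encoder are my own illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class NOClusterer(nn.Module):
    """Frozen encoder lifts discretized trajectories into a feature map;
    a lightweight trainable head maps features to soft cluster assignments."""

    def __init__(self, encoder: nn.Module, feat_dim: int, n_clusters: int):
        super().__init__()
        self.encoder = encoder.eval()               # fixed, pre-trained lift
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.head = nn.Sequential(                  # lightweight trainable head
            nn.Linear(feat_dim, 128), nn.GELU(),
            nn.Linear(128, n_clusters),
        )

    def forward(self, trajectories):
        # trajectories: (batch, n_steps, state_dim) discretized ODE solutions
        with torch.no_grad():
            z = self.encoder(trajectories)          # continuous feature map
        return torch.softmax(self.head(z), dim=-1)  # soft cluster assignments

# toy usage with a placeholder encoder (flatten + linear), purely illustrative
encoder = nn.Sequential(nn.Flatten(), nn.Linear(100 * 2, 64))
model = NOClusterer(encoder, feat_dim=64, n_clusters=3)
probs = model(torch.randn(16, 100, 2))              # (16, 3) soft assignments
```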
Related papers
- Neural-POD: A Plug-and-Play Neural Operator Framework for Infinite-Dimensional Functional Nonlinear Proper Orthogonal Decomposition [7.1950116347185995]
We propose the Neural Proper Orthogonal Decomposition (Neural-POD), a plug-and-play neural operator framework. Neural-POD formulates basis construction as a sequence of residual minimization problems solved through neural network training. We demonstrate the robustness of Neural-POD on complex problems at different resolutions, including the Burgers' and Navier-Stokes equations.
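As a rough illustration of "basis construction as a sequence of residual minimization problems" (a sketch under my own assumptions, not the Neural-POD implementation): each step fits a small network to the current snapshot residual and then deflates the part it explains.

```python
import torch
import torch.nn as nn

def fit_mode(residual, grid, steps=2000, lr=1e-3):
    # residual: (n_snapshots, n_points), grid: (n_points, 1)
    mode = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    opt = torch.optim.Adam(mode.parameters(), lr=lr)
    for _ in range(steps):
        phi = mode(grid).squeeze(-1)                    # candidate basis function
        coeff = residual @ phi / (phi @ phi + 1e-12)    # best rank-1 coefficients
        loss = ((residual - coeff[:, None] * phi) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return mode

def greedy_neural_basis(snapshots, grid, n_modes=4):
    residual, modes = snapshots.clone(), []
    for _ in range(n_modes):
        mode = fit_mode(residual, grid)
        with torch.no_grad():
            phi = mode(grid).squeeze(-1)
            coeff = residual @ phi / (phi @ phi)
            residual = residual - coeff[:, None] * phi  # deflate explained part
        modes.append(mode)
    return modes
```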
arXiv Detail & Related papers (2026-02-17T15:01:40Z) - Theory-to-Practice Gap for Neural Networks and Neural Operators [6.267574471145217]
We study the sampling complexity of learning with ReLU neural networks and neural operators. We show that the best-possible convergence rate in a Bochner $L^p$-norm is bounded by Monte-Carlo rates of order $1/p$.
arXiv Detail & Related papers (2025-03-23T21:45:58Z) - Convergence analysis of wide shallow neural operators within the framework of Neural Tangent Kernel [4.313136216120379]
We conduct a convergence analysis of gradient descent for wide shallow neural operators and physics-informed shallow neural operators within the framework of the Neural Tangent Kernel (NTK). Under over-parametrization, gradient descent finds the global minimum in both continuous and discrete time.
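For intuition about the NTK view of wide shallow models (a toy illustration I have added, not code from the paper), the empirical kernel is simply the inner product of parameter gradients, and in the over-parametrized, lazy regime it stays nearly constant during training.

```python
import torch

torch.manual_seed(0)
m, d = 4096, 3                            # width and input dimension (illustrative)
W = torch.randn(m, d, requires_grad=True)
a = torch.randn(m, requires_grad=True)

def f(x):
    # shallow network f(x) = a^T relu(W x) / sqrt(m)
    return a @ torch.relu(W @ x) / m ** 0.5

def empirical_ntk(x1, x2):
    # NTK entry = <grad_theta f(x1), grad_theta f(x2)>
    g1 = torch.autograd.grad(f(x1), (W, a))
    g2 = torch.autograd.grad(f(x2), (W, a))
    return sum((u * v).sum() for u, v in zip(g1, g2))

x, y = torch.randn(d), torch.randn(d)
print(float(empirical_ntk(x, y)))         # nearly constant under lazy training
```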
arXiv Detail & Related papers (2024-12-07T05:47:28Z) - Can neural operators always be continuously discretized? [7.37972671531752]
We consider the problem of discretization of neural operators between Hilbert spaces in a general framework including skip connections. We show that bilipschitz neural operators may always be written in the form of an alternating composition of strongly monotone neural operators. We also show that neural operators of this type may be approximated through the composition of finite-rank residual neural operators.
arXiv Detail & Related papers (2024-12-04T15:22:54Z) - Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
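A concrete instance of the differential-kernel scaling (an illustration I have added, not the paper's code): a fixed 1-D convolution with the second-difference stencil, scaled by $1/h^2$, approximates the Laplacian on a grid of spacing $h$.

```python
import torch
import torch.nn.functional as F

n = 512
h = 2 * torch.pi / n
x = torch.arange(n) * h
u = torch.sin(x)                                       # test function u(x) = sin(x)

stencil = torch.tensor([[[1.0, -2.0, 1.0]]]) / h ** 2  # conv kernel scaled by 1/h^2
lap_u = F.conv1d(u.view(1, 1, -1), stencil, padding=1).view(-1)

# at interior points this matches u''(x) = -sin(x) up to O(h^2)
err = (lap_u[1:-1] + torch.sin(x)[1:-1]).abs().max()
print(float(err))
```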
arXiv Detail & Related papers (2024-02-26T18:59:31Z) - Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
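As a toy sketch of the canonicalization idea (my own illustration; the paper predicts group elements with suitably structured networks, which this plain MLP omits): a small network predicts a rotation from a 2-D point cloud, the input is mapped into that canonical frame, and an arbitrary predictor is applied there.

```python
import torch
import torch.nn as nn

class CanonicalizedModel(nn.Module):
    def __init__(self, predictor: nn.Module):
        super().__init__()
        # small network that predicts a canonicalizing rotation angle
        self.canon = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
        self.predictor = predictor

    def forward(self, pts):
        # pts: (batch, n_points, 2); pool per-point angle predictions
        theta = self.canon(pts).mean(dim=1).squeeze(-1)        # (batch,)
        c, s = torch.cos(-theta), torch.sin(-theta)
        rot = torch.stack([torch.stack([c, -s], -1),
                           torch.stack([s, c], -1)], dim=-2)   # (batch, 2, 2)
        canonical = pts @ rot.transpose(-1, -2)                # rotate to canonical frame
        return self.predictor(canonical)

model = CanonicalizedModel(nn.Sequential(nn.Flatten(), nn.Linear(8 * 2, 4)))
out = model(torch.randn(5, 8, 2))                              # (5, 4)
```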
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Designing Universal Causal Deep Learning Models: The Case of Infinite-Dimensional Dynamical Systems from Stochastic Analysis [7.373617024876726]
Several non-linear operators in analysis depend on a temporal structure which is not leveraged by contemporary neural operators. This paper introduces a deep learning model-design framework that takes suitable infinite-dimensional linear metric spaces as inputs. We show that our framework can uniformly approximate, on compact sets and across arbitrary finite-time horizons, Hölder or smooth trace class operators.
arXiv Detail & Related papers (2022-10-24T14:43:03Z) - Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
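A minimal sketch of one such kernel-integral layer on a sampled 1-D grid (my own illustration of the construction, not the authors' implementation): a learnable kernel network is evaluated on coordinate pairs, the integral is approximated by a quadrature sum, and a pointwise linear term is added.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelIntegralLayer(nn.Module):
    """One layer v(x_i) = gelu( (1/n) sum_j kappa(x_i, x_j) u(x_j) + W u(x_i) )."""

    def __init__(self, channels, hidden=64):
        super().__init__()
        # kappa maps a coordinate pair (x_i, x_j) to a channels x channels matrix
        self.kappa = nn.Sequential(
            nn.Linear(2, hidden), nn.GELU(),
            nn.Linear(hidden, channels * channels),
        )
        self.pointwise = nn.Linear(channels, channels)
        self.channels = channels

    def forward(self, u, grid):
        # u: (n, channels) function samples, grid: (n, 1) 1-D coordinates
        n = grid.shape[0]
        pairs = torch.cat([grid.repeat_interleave(n, dim=0),
                           grid.repeat(n, 1)], dim=-1)            # (n*n, 2)
        K = self.kappa(pairs).view(n, n, self.channels, self.channels)
        integral = torch.einsum("ijcd,jd->ic", K, u) / n          # quadrature sum
        return F.gelu(integral + self.pointwise(u))

layer = KernelIntegralLayer(channels=8)
grid = torch.linspace(0, 1, 64).unsqueeze(-1)
v = layer(torch.randn(64, 8), grid)                               # (64, 8)
```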
arXiv Detail & Related papers (2021-08-19T03:56:49Z) - On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the learning problem.
Our regret bounds are independent of the number of states.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.