On regularized polynomial functional regression
- URL: http://arxiv.org/abs/2311.03036v2
- Date: Tue, 7 May 2024 09:38:40 GMT
- Title: On regularized polynomial functional regression
- Authors: Markus Holzleitner, Sergei Pereverzyev,
- Abstract summary: This article offers a comprehensive treatment of polynomial functional regression, culminating in the establishment of a novel finite sample bound.
In doing so, it extends and generalizes several findings from the context of linear functional regression as well.
- Score: 1.9718795899929253
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article offers a comprehensive treatment of polynomial functional regression, culminating in the establishment of a novel finite sample bound. This bound encompasses various aspects, including general smoothness conditions, capacity conditions, and regularization techniques. In doing so, it extends and generalizes several findings from the context of linear functional regression as well. We also provide numerical evidence that using higher order polynomial terms can lead to an improved performance.
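The abstract contrasts linear functional regression with higher-order polynomial terms. As a rough illustrative sketch (not the authors' method; the grid size, synthetic curves, quadratic target, and regularization parameter are all assumptions for the demo), one can discretize functional covariates on a grid, build degree-1 and degree-2 features from the curve values, and fit both by Tikhonov (ridge) regularization:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 20                       # curves, grid points per curve
t = np.linspace(0.0, 1.0, m)

# Synthetic functional covariates: smooth random curves sampled on the grid.
X = np.array([np.sin(2 * np.pi * (t + rng.uniform()))
              + 0.1 * rng.standard_normal(m) for _ in range(n)])

# Target with a genuine second-order term: y = L(x) + 0.5*L(x)^2 + noise,
# where L(x) ~ integral of w1(t) x(t) dt, approximated by a quadrature sum.
w1 = np.cos(np.pi * t)
lin = X @ w1 / m
y = lin + 0.5 * lin**2 + 0.05 * rng.standard_normal(n)

def poly_features(X, degree):
    """Grid values, plus vectorized outer products x(s)x(t) for degree 2."""
    feats = [X]
    if degree >= 2:
        feats.append(np.einsum('ns,nt->nst', X, X).reshape(len(X), -1))
    return np.hstack(feats)

def ridge_fit(Phi, y, lam):
    """Tikhonov-regularized least squares in closed form."""
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

mses = {}
for degree in (1, 2):
    Phi = poly_features(X, degree)
    coef = ridge_fit(Phi, y, lam=1e-3)
    mses[degree] = float(np.mean((Phi @ coef - y) ** 2))
    print(f"degree {degree}: train MSE = {mses[degree]:.4f}")
```

With the quadratic features included, the fit can capture the second-order term that a purely linear functional model misses, in line with the abstract's numerical evidence that higher-order polynomial terms can improve performance.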
Related papers
- Exploring the Potential of Polynomial Basis Functions in Kolmogorov-Arnold Networks: A Comparative Study of Different Groups of Polynomials [0.0]
This paper presents a survey of 18 distinct polynomials and their potential applications in Kolmogorov-Arnold Network (KAN) models.
The study aims to investigate the suitability of these polynomials as basis functions in KAN models for complex tasks like handwritten digit classification.
The performance metrics of the KAN models, including overall accuracy, Kappa, and F1 score, are evaluated and compared.
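Since that survey concerns polynomial bases used as KAN edge functions, here is a minimal, hypothetical illustration (not the paper's code; the basis size and coefficients are arbitrary) of evaluating a Chebyshev basis with NumPy:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Chebyshev basis T_0..T_4 evaluated on inputs in [-1, 1]; a KAN-style edge
# function is then a learnable linear combination of these basis columns.
x = np.linspace(-1.0, 1.0, 5)
basis = np.stack([C.chebval(x, np.eye(5)[k]) for k in range(5)], axis=1)

coeffs = np.array([0.0, 1.0, 0.5, 0.0, 0.0])   # arbitrary edge coefficients
edge_out = basis @ coeffs                       # T_1(x) + 0.5 * T_2(x)
print(basis.shape, edge_out)
```

Swapping in another of the surveyed families (Legendre, Gottlieb, etc.) only changes how the basis matrix is built.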
arXiv Detail & Related papers (2024-05-30T20:40:16Z) - Multiparameter regularization and aggregation in the context of polynomial functional regression [2.1960518939650475]
We introduce an algorithm for multiple parameter regularization and present a theoretically grounded method for dealing with the associated parameters.
The efficacy of the proposed approach is assessed through evaluations on both synthetic and some real-world medical data, revealing promising results.
arXiv Detail & Related papers (2024-05-07T09:26:20Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
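That paper's analysis is analytic; as a purely illustrative companion (the dimensions, noise level, and regularization grid are all assumptions, not values from the paper), the snippet below fits ridge regression in the high-dimensional regime d > n via the dual form and scans the regularization strength:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 100, 300                      # high-dimensional regime: d > n
X = rng.standard_normal((n, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)
y = X @ w_star + 0.1 * rng.standard_normal(n)

X_test = rng.standard_normal((1000, d)) / np.sqrt(d)
y_test = X_test @ w_star

def ridge(X, y, lam):
    # Dual (kernel) form: an n x n solve, cheaper when d > n,
    # and identical to the primal ridge estimator.
    alpha = np.linalg.solve(X @ X.T + lam * np.eye(X.shape[0]), y)
    return X.T @ alpha

errors = {lam: float(np.mean((X_test @ ridge(X, y, lam) - y_test) ** 2))
          for lam in (1e-4, 1e-2, 1e-1, 1.0)}
print({k: round(v, 3) for k, v in errors.items()})
```

Sweeping lam like this is the empirical counterpart of the bias-variance trade-off such scaling analyses characterize in closed form.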
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - Structure of universal formulas [13.794391803767617]
We introduce a hierarchy of classes connecting the global approximability property to the weaker property of infinite VC dimension.
We show that fixed-size neural networks with not more than one layer of neurons having polynomial activations cannot approximate functions on arbitrary finite sets.
We give examples of functional families, including two-hidden-layer neural networks, that approximate functions on arbitrary finite sets, but fail to do that on the whole domain of definition.
arXiv Detail & Related papers (2023-11-07T11:50:25Z) - Convergence of Dynamics on Inductive Systems of Banach Spaces [68.8204255655161]
Examples are phase transitions in the thermodynamic limit, the emergence of classical mechanics from quantum theory at large action, and continuum quantum field theory arising from renormalization group fixed points.
We present a flexible modeling tool for the limit of theories: soft inductive limits constituting a generalization of inductive limits of Banach spaces.
arXiv Detail & Related papers (2023-06-28T09:52:20Z) - The Geometry of Memoryless Stochastic Policy Optimization in Infinite-Horizon POMDPs [0.0]
We consider the problem of finding the best memoryless policy for an infinite-horizon partially observable decision process.
We show that the discounted state-action frequencies and the expected cumulative reward are rational functions of the policy, whereby the degree is determined by the degree of partial observability.
arXiv Detail & Related papers (2021-10-14T14:42:09Z) - A block-sparse Tensor Train Format for sample-efficient high-dimensional Polynomial Regression [0.0]
Low-rank tensors are an established framework for high-dimensional problems.
We propose to extend this framework by including the concept of block-sparsity.
This allows us to adapt the ansatz space to align better with known sample results.
arXiv Detail & Related papers (2021-04-29T10:57:53Z) - Bilinear Classes: A Structural Framework for Provable Generalization in RL [119.42509700822484]
Bilinear Classes is a new structural framework which permits generalization in reinforcement learning.
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable.
Our main result provides an RL algorithm which has polynomial sample complexity for Bilinear Classes.
arXiv Detail & Related papers (2021-03-19T16:34:20Z) - Conditional gradient methods for stochastically constrained convex minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
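The paper's methods layer variance reduction, smoothing, and per-iteration constraint subsampling on top of conditional gradient steps; those refinements are beyond a snippet, but the basic conditional gradient (Frank-Wolfe) iteration they build on can be sketched as follows (the simplex domain and quadratic objective are illustrative assumptions):

```python
import numpy as np

def frank_wolfe_simplex(grad, x0, n_iter=200):
    """Basic conditional gradient on the probability simplex.

    The linear minimization oracle over the simplex is a coordinate pick:
    argmin_s <g, s> over the simplex is the vertex e_i with i = argmin g.
    """
    x = x0.copy()
    for t in range(n_iter):
        g = grad(x)
        s = np.zeros_like(x)
        s[int(np.argmin(g))] = 1.0
        gamma = 2.0 / (t + 2.0)            # standard diminishing step size
        x = (1.0 - gamma) * x + gamma * s  # stays inside the simplex
    return x

# Example: project b onto the simplex, i.e. minimize f(x) = 0.5 * ||x - b||^2.
b = np.array([0.1, 0.5, 0.2, 0.9])
x0 = np.full(4, 0.25)
x = frank_wolfe_simplex(lambda x: x - b, x0)
print(x.round(3), "sum:", x.sum())
```

Each iteration touches the feasible set only through the linear oracle, which is what makes it natural to process just a subset of constraints per step, as the paper proposes.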
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - A maximum-entropy approach to off-policy evaluation in average-reward MDPs [54.967872716145656]
This work focuses on off-policy evaluation (OPE) with function approximation in infinite-horizon undiscounted Markov decision processes (MDPs).
We provide the first finite-sample OPE error bound, extending existing results beyond the episodic and discounted cases.
We show that this results in an exponential-family distribution whose sufficient statistics are the features, paralleling maximum-entropy approaches in supervised learning.
arXiv Detail & Related papers (2020-06-17T18:13:37Z) - A refinement of Reznick's Positivstellensatz with applications to quantum information theory [72.8349503901712]
In resolving Hilbert's 17th problem, Artin showed that any positive definite polynomial in several variables can be written as the quotient of two sums of squares.
Reznick showed that the denominator in Artin's result can always be chosen as an $N$-th power of the squared norm of the variables.
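As a concrete classical illustration of why such norm-power denominators help (a standard textbook example, not taken from the paper above):

```latex
% Reznick: for a positive definite form $p$ on $\mathbb{R}^n$ there exists
% an $N$ such that the following product is a sum of squares of polynomials:
\[
  \Big(\sum_{i=1}^{n} x_i^2\Big)^{N} p(x) \in \Sigma^2 .
\]
% The Motzkin form
\[
  M(x,y,z) = x^4 y^2 + x^2 y^4 - 3\,x^2 y^2 z^2 + z^6
\]
% is nonnegative on $\mathbb{R}^3$ but is not itself a sum of squares,
% whereas the product $(x^2+y^2+z^2)\,M(x,y,z)$ is.
```

(The Motzkin form is only positive semidefinite, so it sits just outside Reznick's hypothesis, but it shows how multiplying by a power of the squared norm can repair a failed sum-of-squares representation.)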
arXiv Detail & Related papers (2019-09-04T11:46:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.