Fading memory and the convolution theorem
- URL: http://arxiv.org/abs/2408.07386v1
- Date: Wed, 14 Aug 2024 09:06:25 GMT
- Title: Fading memory and the convolution theorem
- Authors: Juan-Pablo Ortega, Florian Rossmannek
- Abstract summary: Topological and analytical notions of continuity and fading memory for causal and time-invariant filters are introduced.
The main theorem shows that the availability of convolution representations can be characterized, at least when the codomain is finite-dimensional.
When the input space and the codomain of a linear functional are Hilbert spaces, it is shown that minimal continuity and the minimal fading memory property guarantee the existence of interesting embeddings of the associated reproducing kernel Hilbert spaces.
- Score: 5.248564173595025
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Several topological and analytical notions of continuity and fading memory for causal and time-invariant filters are introduced, and the relations between them are analysed. A significant generalization of the convolution theorem that establishes the equivalence between the fading memory property and the availability of convolution representations of linear filters is proved. This result extends a previous such characterization to a complete array of weighted norms in the definition of the fading memory property. Additionally, the main theorem shows that the availability of convolution representations can be characterized, at least when the codomain is finite-dimensional, not only by the fading memory property but also by the reunion of two purely topological notions that are called minimal continuity and minimal fading memory property. Finally, when the input space and the codomain of a linear functional are Hilbert spaces, it is shown that minimal continuity and the minimal fading memory property guarantee the existence of interesting embeddings of the associated reproducing kernel Hilbert spaces and approximation results of solutions of kernel regressions in the presence of finite data sets.
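To fix ideas, here is a sketch in LaTeX of the classical objects behind the result; the weight and norm conventions follow the standard fading-memory literature and may differ from this paper's exact setup:

```latex
% Illustrative conventions (not necessarily the paper's): inputs are
% left-infinite bounded sequences z = (z_t)_{t \le 0}, and w is a
% decreasing weight sequence defining the weighted norm
\[
  \|z\|_w \;=\; \sup_{t \ge 0} w_t \, |z_{-t}| ,
  \qquad w : \mathbb{N} \to (0,1] \ \text{decreasing}.
\]
% Convolution theorem (classical form): for a linear, causal,
% time-invariant filter U, the fading memory property (continuity with
% respect to \|\cdot\|_w) is equivalent to a convolution representation
\[
  U(z)_t \;=\; \sum_{j=0}^{\infty} h_j \, z_{t-j}
  \qquad \text{for some kernel sequence } (h_j)_{j \ge 0}.
\]
```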
Related papers
- A nonlinear elasticity model in computer vision [0.0]
The purpose of this paper is to analyze a nonlinear elasticity model previously introduced by the authors for comparing two images.
The existence of minimizers is proved among transformations of pairs of vector-valued intensity maps.
The question is also considered as to whether, for images related by a linear mapping, the unique minimizer is given by that mapping.
arXiv Detail & Related papers (2024-08-30T12:27:22Z)
- Beyond Log-Concavity: Theory and Algorithm for Sum-Log-Concave Optimization [0.0]
We show that sum-log-concave functions are in general not convex but still satisfy generalized convexity inequalities.
We propose the Cross Gradient Descent (XGD) algorithm, which moves in the opposite direction of the cross-gradient.
We introduce the so-called checkered regression method, which relies on a sum-log-concave function.
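As a quick, self-contained illustration of the first claim (our toy construction, not the paper's definitions): the negative logarithm of a sum of two log-concave terms can fail midpoint convexity.

```python
import numpy as np

# Toy sum-log-concave object: a sum of two log-concave Gaussian bumps.
# Its negative log is in general NOT convex, which is the phenomenon the
# generalized convexity inequalities are designed to work around.
def f(x):
    return np.exp(-x**2) + np.exp(-(x - 4.0) ** 2)

def g(x):
    return -np.log(f(x))  # would be convex if f were log-concave

x0, x1 = 0.0, 4.0
mid = 0.5 * (x0 + x1)
print(g(mid), 0.5 * (g(x0) + g(x1)))   # ~3.31 vs ~0.0
assert g(mid) > 0.5 * (g(x0) + g(x1))  # midpoint convexity is violated
```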
arXiv Detail & Related papers (2023-09-26T22:22:45Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression [1.7227952883644062]
This paper studies the convergence performance of divide-and-conquer estimators in the scenario that the target function does not reside in the underlying kernel space.
As a decomposition-based scalable approach, the divide-and-conquer estimators of functional linear regression can substantially reduce the algorithmic complexities in time and memory.
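A minimal numpy sketch of the divide-and-conquer idea (ours, with scalar inputs standing in for the paper's functional covariates): fit kernel ridge regression on disjoint splits and average the resulting predictors.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, gamma=1.0):
    # Gaussian kernel matrix between the rows of A and the rows of B.
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def krr_fit(X, y, lam=1e-2):
    # Kernel ridge regression on one split: solve (K + lam*n*I) a = y.
    K = rbf(X, X)
    a = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return lambda Xq: rbf(Xq, X) @ a

# Toy scalar data; the paper's setting has functional covariates instead.
X = rng.uniform(-1, 1, size=(600, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(600)

# Divide and conquer: each split costs O((n/m)^3) instead of O(n^3).
m = 6
fits = [krr_fit(Xs, ys)
        for Xs, ys in zip(np.array_split(X, m), np.array_split(y, m))]
Xq = np.linspace(-1, 1, 5)[:, None]
print(np.mean([fit(Xq) for fit in fits], axis=0))  # averaged predictor
```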
arXiv Detail & Related papers (2022-11-20T12:29:06Z)
- Log-linear Guardedness and its Implications [116.87322784046926]
Methods for erasing human-interpretable concepts from neural representations that assume linearity have been found to be tractable and useful.
This work formally defines the notion of log-linear guardedness as the inability of an adversary to predict the concept directly from the representation.
We show that, in the binary case, under certain assumptions, a downstream log-linear model cannot recover the erased concept.
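For intuition, a small sketch of linearity-based erasure followed by a log-linear adversary (a one-step nullspace projection, used here purely as an illustration; the paper's analysis of guardedness is more abstract):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical representations: one direction carries a binary concept z.
n, d = 2000, 16
z = rng.integers(0, 2, n)
H = rng.standard_normal((n, d))
H[:, 0] += 3.0 * z                             # concept direction

# Erase by projecting onto the nullspace of a linear probe for z.
probe = LogisticRegression().fit(H, z)
w = probe.coef_ / np.linalg.norm(probe.coef_)  # unit probe direction, (1, d)
H_erased = H @ (np.eye(d) - w.T @ w)

# A downstream log-linear adversary now scores near chance (~0.5).
adv = LogisticRegression().fit(H_erased, z)
print(adv.score(H_erased, z))
```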
arXiv Detail & Related papers (2022-10-18T17:30:02Z)
- Neural and spectral operator surrogates: unified construction and expression rate bounds [0.46040036610482665]
We study approximation rates for deep surrogates of maps between infinite-dimensional function spaces.
Operator in- and outputs from function spaces are assumed to be parametrized by stable, affine representation systems.
arXiv Detail & Related papers (2022-07-11T15:35:14Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- On Linear Separability under Linear Compression with Applications to Hard Support Vector Machine [0.0]
We show that linear separability is maintained as long as the distortion of the inner products is smaller than the squared margin of the original data-generating distribution.
As applications, we derive bounds on the (i) compression length of random sub-Gaussian matrices; and (ii) generalization error for compressive learning with hard-SVM.
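A rough numerical illustration of this margin condition (our parameters, not the paper's): compress unit-norm separable data with a random Gaussian matrix and compare margins before and after.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit-norm, linearly separable data with margin gamma around w = e_1.
n, d, k = 200, 500, 250
w = np.zeros(d); w[0] = 1.0
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X @ w)
X += 0.5 * y[:, None] * w            # push points away from the separator
X /= np.linalg.norm(X, axis=1, keepdims=True)
gamma = np.min(y * (X @ w))          # realized margin

# Random sub-Gaussian compression R^d -> R^k, scaled to preserve inner
# products in expectation; per-pair distortion is O(1/sqrt(k)).
A = rng.standard_normal((k, d)) / np.sqrt(k)
compressed_margin = np.min(y * ((X @ A.T) @ (A @ w)))
print(gamma, compressed_margin)      # typically both positive: still separable
```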
arXiv Detail & Related papers (2022-02-02T16:23:01Z)
- Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves contrast better than existing discretizations.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and, in particular, dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
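A minimal sketch of the dissipative (conformal) symplectic idea on a damped oscillator (our toy example, not the paper's code): integrate the dissipative part exactly and the Hamiltonian part symplectically, so the discrete energy decays at the correct exponential rate.

```python
import numpy as np

# Damped oscillator: dq/dt = p, dp/dt = -q - gamma*p, H = (q**2 + p**2)/2.
gamma, h, steps = 0.5, 0.01, 1000
q, p = 1.0, 0.0
for _ in range(steps):
    p *= np.exp(-gamma * h)  # exact flow of the dissipative part
    p -= h * q               # symplectic Euler kick ...
    q += h * p               # ... then drift

t = steps * h
E = 0.5 * (q**2 + p**2)
print(E, 0.5 * np.exp(-gamma * t))  # energy envelope decays like exp(-gamma*t)
```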
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
- Quantum Geometric Confinement and Dynamical Transmission in Grushin Cylinder [68.8204255655161]
We classify the self-adjoint realisations of the Laplace-Beltrami operator minimally defined on an infinite cylinder.
We retrieve those distinguished extensions previously identified in the recent literature, namely the most confining and the most transmitting.
arXiv Detail & Related papers (2020-03-16T11:37:23Z)