Asymptotic Theory of $\ell_1$-Regularized PDE Identification from a
Single Noisy Trajectory
- URL: http://arxiv.org/abs/2103.07045v1
- Date: Fri, 12 Mar 2021 02:23:04 GMT
- Title: Asymptotic Theory of $\ell_1$-Regularized PDE Identification from a
Single Noisy Trajectory
- Authors: Yuchen He, Namjoon Suh, Xiaoming Huo, Sungha Kang, Yajun Mei
- Abstract summary: We prove the support recovery for a general class of linear and nonlinear evolutionary partial differential equation (PDE) identification from a single noisy trajectory.
We provide a set of sufficient conditions which guarantee that, from a single trajectory data denoised by a Local-Polynomial filter, the support of $\mathbf{c}(\lambda)$ asymptotically converges to the true signed-support associated with the underlying PDE.
- Score: 2.0299248281970956
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We prove the support recovery for a general class of linear and nonlinear
evolutionary partial differential equation (PDE) identification from a single
noisy trajectory using $\ell_1$ regularized Pseudo-Least Squares
model~($\ell_1$-PsLS). In any associative $\mathbb{R}$-algebra generated by
finitely many differentiation operators that contain the unknown PDE operator,
applying $\ell_1$-PsLS to a given data set yields a family of candidate models
with coefficients $\mathbf{c}(\lambda)$ parameterized by the regularization
weight $\lambda\geq 0$. The trace of $\{\mathbf{c}(\lambda)\}_{\lambda\geq 0}$
suffers from high variance due to data noises and finite difference
approximation errors. We provide a set of sufficient conditions which guarantee
that, from a single trajectory data denoised by a Local-Polynomial filter, the
support of $\mathbf{c}(\lambda)$ asymptotically converges to the true
signed-support associated with the underlying PDE for sufficiently many data
and a certain range of $\lambda$. We also show various numerical experiments to
validate our theory.
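
The abstract describes a concrete pipeline: denoise the single trajectory with a Local-Polynomial filter, build a library of candidate differential terms by numerical differentiation, and trace the $\ell_1$-regularized pseudo-least-squares coefficients $\mathbf{c}(\lambda)$ over a range of $\lambda$. The following is a minimal sketch of that workflow, not the authors' implementation: the synthetic trajectory, the Savitzky-Golay smoother (standing in for the Local-Polynomial filter), and the dictionary of terms (`u`, `u_x`, `u_xx`, `u*u_x`, `u^2`) are illustrative assumptions, and scikit-learn's `lasso_path` supplies the $\lambda$-path.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.linear_model import lasso_path

# --- synthetic single noisy trajectory u(x, t) on a uniform grid (placeholder data) ---
nx, nt = 256, 100
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.linspace(0.0, 1.0, nt)
dx, dt = x[1] - x[0], t[1] - t[0]
U_clean = np.exp(-0.1 * t[None, :]) * np.sin(x[:, None] - t[None, :])
U = U_clean + 0.01 * np.random.default_rng(0).standard_normal(U_clean.shape)

# --- step 1: denoise (Savitzky-Golay as a stand-in for the Local-Polynomial filter) ---
U_s = savgol_filter(U, window_length=15, polyorder=3, axis=0)    # smooth in x
U_s = savgol_filter(U_s, window_length=11, polyorder=3, axis=1)  # smooth in t

# --- step 2: finite-difference dictionary of candidate PDE terms ---
U_t  = np.gradient(U_s, dt, axis=1)
U_x  = np.gradient(U_s, dx, axis=0)
U_xx = np.gradient(U_x, dx, axis=0)
features = {
    "u": U_s, "u_x": U_x, "u_xx": U_xx,
    "u*u_x": U_s * U_x, "u^2": U_s ** 2,
}
names = list(features)
X = np.column_stack([f.ravel() for f in features.values()])
y = U_t.ravel()

# --- step 3: l1-regularized pseudo-least squares over a grid of lambda values ---
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)

# --- step 4: read off the support of c(lambda) along the path ---
for lam, c in zip(alphas[::10], coefs.T[::10]):
    support = [n for n, cj in zip(names, c) if abs(cj) > 1e-8]
    print(f"lambda={lam:.3e}  support={support}")
```

Inspecting the nonzero entries of $\mathbf{c}(\lambda)$ along the path, as in the last loop, is how the signed-support recovery statement would be checked empirically on a given trajectory.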
Related papers
- Near-Optimal and Tractable Estimation under Shift-Invariance [0.21756081703275998]
The class of all such signals is extremely rich: it contains all exponential oscillations over $\mathbb{C}^n$ with total degree $s$.
We show that the statistical complexity of this class, as measured by the squared minimax radius of the $(\delta)$-confidence $\ell$-ball, is nearly the same as for the class of $s$-sparse signals, namely $O\left(s\log(en) + \log(\delta^{-1})\right) \cdot \log(en/s)$.
arXiv Detail & Related papers (2024-11-05T18:11:23Z) - Statistical-Computational Trade-offs for Density Estimation [60.81548752871115]
We show that for a broad class of data structures their bounds cannot be significantly improved.
This is a novel statistical-computational trade-off for density estimation.
arXiv Detail & Related papers (2024-10-30T15:03:33Z) - Data subsampling for Poisson regression with pth-root-link [53.63838219437508]
We develop and analyze data subsampling techniques for Poisson regression.
In particular, we consider the Poisson generalized linear model with identity- and square-root-link functions.
arXiv Detail & Related papers (2024-10-30T10:09:05Z) - A computational transition for detecting correlated stochastic block models by low-degree polynomials [13.396246336911842]
Detection of correlation in a pair of random graphs is a fundamental statistical and computational problem that has been extensively studied in recent years.
We consider a pair of correlated stochastic block models $\mathcal{S}(n,\tfrac{\lambda}{n};k,\epsilon;s)$ that are subsampled from a common parent stochastic block model $\mathcal{S}(n,\tfrac{\lambda}{n};k,\epsilon;s)$.
We focus on tests based on low-degree polynomials of the entries of the adjacency matrices.
arXiv Detail & Related papers (2024-09-02T06:14:05Z) - Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit [75.4661041626338]
We study the problem of gradient descent learning of a single-index target function $f_*(\boldsymbol{x}) = \sigma_*\left(\langle\boldsymbol{x},\boldsymbol{\theta}\rangle\right)$ under isotropic Gaussian data.
We prove that a two-layer neural network optimized by an SGD-based algorithm learns $f_*$ with an arbitrary link function at a sample and runtime complexity of $n \asymp T \asymp C(q) \cdot d$, up to polylogarithmic factors in $d$.
arXiv Detail & Related papers (2024-06-03T17:56:58Z) - Further Understanding of a Local Gaussian Process Approximation: Characterising Convergence in the Finite Regime [1.3518297878940662]
We show that common choices of kernel functions for a highly accurate and massively scalable GPnn regression model exhibit gradual convergence to asymptotic behaviour as the dataset size $n$ increases.
Similar bounds can be found under model misspecification and combined to give overall rates of convergence of both MSE and an important calibration metric.
arXiv Detail & Related papers (2024-04-09T10:47:01Z) - A Unified Framework for Uniform Signal Recovery in Nonlinear Generative
Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z) - Learning a Single Neuron with Adversarial Label Noise via Gradient
Descent [50.659479930171585]
We study a function of the form $\mathbf{x}\mapsto\sigma(\mathbf{w}\cdot\mathbf{x})$ for monotone activations.
The goal of the learner is to output a hypothesis vector $\mathbf{w}$ such that $F(\mathbf{w}) = C\,\epsilon$ with high probability (see the gradient-descent sketch after this list).
arXiv Detail & Related papers (2022-06-17T17:55:43Z) - Learning elliptic partial differential equations with randomized linear
algebra [2.538209532048867]
We show that one can construct an approximant to $G$ that converges almost surely.
The quantity $0 < \Gamma_\epsilon \leq 1$ characterizes the quality of the training dataset.
arXiv Detail & Related papers (2021-01-31T16:57:59Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2})+\epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)