Local Polynomial Lp-norm Regression
- URL: http://arxiv.org/abs/2504.18695v1
- Date: Fri, 25 Apr 2025 21:04:19 GMT
- Title: Local Polynomial Lp-norm Regression
- Authors: Ladan Tazik, James Stafford, John Braun
- Abstract summary: Local least squares estimation cannot provide optimal results when non-Gaussian noise is present. It is suggested that $L_p$-norm estimators be used to minimize the residuals when these exhibit non-normal kurtosis. We show our method's superiority over local least squares in one-dimensional data and show promising outcomes for higher dimensions, specifically in 2D.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The local least squares estimator for a regression curve cannot provide optimal results when non-Gaussian noise is present. Both theoretical and empirical evidence suggests that residuals often exhibit distributional properties different from those of a normal distribution, making it worthwhile to consider estimation based on other norms. It is suggested that $L_p$-norm estimators be used to minimize the residuals when these exhibit non-normal kurtosis. In this paper, we propose a local polynomial $L_p$-norm regression that replaces weighted least squares estimation with weighted $L_p$-norm estimation for fitting the polynomial locally. We also introduce a new method for estimating the parameter $p$ from the residuals, enhancing the adaptability of the approach. Through numerical and theoretical investigation, we demonstrate our method's superiority over local least squares in one-dimensional data and show promising outcomes for higher dimensions, specifically in 2D.
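For concreteness, the following is a minimal sketch of the core recipe: at each evaluation point, replace the kernel-weighted least squares fit with a kernel-weighted $L_p$ fit, solved here by iteratively reweighted least squares (IRLS). The function name, Gaussian kernel, and defaults are illustrative assumptions, and the paper's data-driven estimation of $p$ from the residuals is not implemented.

```python
import numpy as np

def local_poly_lp(x, y, x0, p=1.5, h=0.1, degree=1, n_iter=50, eps=1e-8):
    """Estimate the regression function at x0 by fitting a polynomial that
    minimizes the kernel-weighted sum of |residual|^p, using iteratively
    reweighted least squares (IRLS); p = 2 recovers local least squares."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)              # Gaussian kernel weights
    X = np.vander(x - x0, degree + 1, increasing=True)  # columns [1, (x-x0), ...]
    w = k.copy()                                        # initialize at the L2 fit
    for _ in range(n_iter):
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        r = y - X @ beta
        w = k * np.maximum(np.abs(r), eps) ** (p - 2)   # IRLS reweighting for Lp
    return beta[0]                                      # intercept = fit at x0

# Heavy-tailed (Laplace) noise, a setting where an exponent p < 2 tends to help.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 300)
y = np.sin(2 * np.pi * x) + rng.laplace(scale=0.2, size=x.size)
curve = [local_poly_lp(x, y, t, p=1.2) for t in np.linspace(0.1, 0.9, 9)]
```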
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate. We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Mean-field variational inference with the TAP free energy: Geometric and statistical properties in linear models [20.311583934266903]
We show that the landscape of the TAP free energy is strongly convex in an extensive neighborhood of a local minimizer.
We prove analogous properties for a local minimizer of the TAP free energy reachable by approximate message passing (AMP), and show that posterior inference based on this minimizer remains correctly calibrated.
arXiv Detail & Related papers (2023-11-14T17:35:01Z)
- Efficient Minimax Optimal Estimators For Multivariate Convex Regression [1.583842747998493]
We present the first computationally efficient minimax optimal (up to logarithmic factors) estimators for the tasks of (i) $L$-Lipschitz convex regression and (ii) $\Gamma$-bounded convex regression under polytopal support.
This work is the first to show the existence of efficient minimax optimal estimators for non-Donsker classes whose corresponding Least Squares Estimators are provably minimax sub-optimal.
arXiv Detail & Related papers (2022-05-06T17:04:05Z)
- $p$-Generalized Probit Regression and Scalable Maximum Likelihood Estimation via Sketching and Coresets [74.37849422071206]
We study the $p$-generalized probit regression model, which is a generalized linear model for binary responses.
We show how the maximum likelihood estimator for $p$-generalized probit regression can be approximated efficiently up to a factor of $(1+\varepsilon)$ on large data.
arXiv Detail & Related papers (2022-03-25T10:54:41Z)
- Random Forest Weighted Local Fréchet Regression with Random Objects [18.128663071848923]
We propose a novel random forest weighted local Fréchet regression paradigm. Our first method uses the random forest weights as the local average to solve the conditional Fréchet mean, while our second method performs local linear Fréchet regression; both significantly improve existing Fréchet regression methods.
arXiv Detail & Related papers (2022-02-10T09:10:59Z)
- Improving Nonparametric Classification via Local Radial Regression with an Application to Stock Prediction [16.000748943982494]
The well-known nonparametric kernel smoother and $k$-nearest neighbor ($k$-NN) estimators are consistent but biased, particularly for a large radius of the ball.
This paper proposes local radial regression (LRR) and its logistic variant, local radial logistic regression (LRLR), by combining the advantages of local polynomial regression (LPoR) and multiscale $k$-NN (MS-$k$-NN).
Our numerical experiments, including real-world datasets of daily stock indices, demonstrate that LRLR outperforms LPoR and MS-$k$-NN.
arXiv Detail & Related papers (2021-12-28T00:32:02Z)
- Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandits, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator [93.05919133288161]
We show that the variance of the straight-through variant of the popular Gumbel-Softmax estimator can be reduced through Rao-Blackwellization.
This provably reduces the mean squared error.
We empirically demonstrate that this leads to variance reduction, faster convergence, and generally improved performance in two unsupervised latent variable models; a minimal sketch of the underlying Gumbel-Softmax draw appears after this list.
arXiv Detail & Related papers (2020-10-09T22:54:38Z)
- Nearest Neighbour Based Estimates of Gradients: Sharp Nonasymptotic Bounds and Applications [0.6445605125467573]
Gradient estimation is of crucial importance in statistics and learning theory.
We consider here the classic regression setup, where a real-valued square-integrable random variable $Y$ is to be predicted.
We prove nonasymptotic bounds improving upon those obtained for alternative estimation methods; a generic nearest-neighbour construction is sketched after this list.
arXiv Detail & Related papers (2020-06-26T15:19:43Z)
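For the Rao-Blackwellized straight-through Gumbel-Softmax entry above, here is a minimal NumPy sketch of the underlying straight-through Gumbel-Softmax draw; the Rao-Blackwellization step is only indicated in a comment, and all names and defaults are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def st_gumbel_softmax(logits, tau=1.0, rng=None):
    """One straight-through Gumbel-Softmax draw: the forward pass uses the exact
    one-hot categorical sample (Gumbel-max trick); in an autodiff framework the
    backward pass would flow through the relaxed sample y_soft instead."""
    rng = rng or np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    z = logits + g
    y_soft = np.exp((z - z.max()) / tau)
    y_soft /= y_soft.sum()                                # softmax(z / tau)
    y_hard = np.eye(logits.size)[np.argmax(z)]            # exact one-hot sample
    # Straight-through composite (in PyTorch/JAX):
    #   y = y_hard + y_soft - stop_gradient(y_soft)
    # Rao-Blackwellization, per the paper's idea, averages y_soft over several
    # Gumbel draws sharing the same discrete outcome, reducing gradient variance.
    return y_hard, y_soft

hard, soft = st_gumbel_softmax(np.log(np.array([0.7, 0.2, 0.1])), tau=0.5)
```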
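For the nearest-neighbour gradient entry, one generic construction (an illustration, not necessarily the paper's exact estimator) fits a local linear least-squares model on the $k$ nearest neighbours and reads the gradient off the slope coefficients:

```python
import numpy as np

def nn_gradient(X, Y, x0, k=25):
    """Estimate the gradient of x -> E[Y | X = x] at x0 from the k nearest
    neighbours via a local linear least-squares fit."""
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]  # k nearest neighbours
    Z = np.hstack([np.ones((k, 1)), X[idx] - x0])         # design [1, x - x0]
    beta, *_ = np.linalg.lstsq(Z, Y[idx], rcond=None)
    return beta[1:]                                       # slopes = gradient

# f(x) = x1^2 + 3 x2 has gradient (2 x1, 3); at (0.5, 0) expect roughly (1, 3).
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
Y = X[:, 0] ** 2 + 3.0 * X[:, 1] + rng.normal(scale=0.05, size=2000)
print(nn_gradient(X, Y, np.array([0.5, 0.0])))
```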
This list is automatically generated from the titles and abstracts of the papers on this site.