Sparse-penalized deep neural networks estimator under weak dependence
- URL: http://arxiv.org/abs/2303.01406v1
- Date: Thu, 2 Mar 2023 16:53:51 GMT
- Title: Sparse-penalized deep neural networks estimator under weak dependence
- Authors: William Kengne and Modou Wade
- Abstract summary: We consider the nonparametric regression and the classification problems for $\psi$-weakly dependent processes.
A penalized estimation method for sparse deep neural networks is developed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the nonparametric regression and the classification problems for
$\psi$-weakly dependent processes. This weak dependence structure is more
general than conditions such as mixing or association. A penalized
estimation method for sparse deep neural networks is developed. In both
nonparametric regression and binary classification problems, we establish
oracle inequalities for the excess risk of the sparse-penalized deep neural
networks estimators. Convergence rates of the excess risk of these estimators
are also derived. The simulation results show that the proposed
estimators overall perform better than the non-penalized estimators.
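To fix ideas, the following is a minimal sketch of a sparse-penalized least-squares deep neural network estimator with a clipped-$\ell_1$ penalty of the kind common in the sparse-penalized DNN literature. The architecture and the constants lam (penalty level) and tau (clipping threshold) are illustrative placeholders, not the calibration derived in the paper.

```python
# Minimal sketch (illustrative): empirical risk + clipped-L1 penalty on the
# network weights. lam and tau are placeholders, not the paper's tuning;
# the data may be a weakly dependent time series.
import torch
import torch.nn as nn

def clipped_l1(model: nn.Module, tau: float) -> torch.Tensor:
    """Clipped-L1 penalty: sum_j min(|theta_j| / tau, 1)."""
    return sum(torch.clamp(p.abs() / tau, max=1.0).sum() for p in model.parameters())

def fit(X: torch.Tensor, y: torch.Tensor, lam: float = 1e-3, tau: float = 1e-2,
        epochs: int = 500) -> nn.Module:
    net = nn.Sequential(nn.Linear(X.shape[1], 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X).squeeze(-1), y) + lam * clipped_l1(net, tau)
        loss.backward()
        opt.step()
    return net
```

Note that the fitting step itself is the same as in the i.i.d. setting; the weak dependence of the observations enters through the theoretical analysis (oracle inequalities and convergence rates), not through the optimization.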
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and in particular do not rely on sample-size-dependent smoothing parameters.
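For reference, the sketch below is the plain one-nearest-neighbor matching estimator of the average treatment effect that such modifications start from; it is not the paper's root-$n$-consistent variant, and the names are hypothetical.

```python
# Baseline one-nearest-neighbor matching estimator of the average treatment
# effect (the plain version, not the paper's modification). Note there is
# no smoothing parameter: each missing potential outcome is imputed by the
# nearest neighbor in the opposite treatment group.
import numpy as np
from scipy.spatial import cKDTree

def matching_ate(X: np.ndarray, y: np.ndarray, d: np.ndarray) -> float:
    treated, control = d == 1, d == 0
    _, idx_c = cKDTree(X[control]).query(X[treated], k=1)
    _, idx_t = cKDTree(X[treated]).query(X[control], k=1)
    effects = np.concatenate([y[treated] - y[control][idx_c],    # treated units
                              y[treated][idx_t] - y[control]])   # control units
    return float(effects.mean())
```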
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Robust deep learning from weakly dependent data [0.0]
This paper considers robust deep learning from weakly dependent observations, with unbounded loss function and unbounded input/output.
We derive a relationship between the resulting risk bounds and the moment order $r$; when the data have moments of any order (that is, $r=\infty$), the convergence rate is close to some well-known results.
arXiv Detail & Related papers (2024-05-08T14:25:40Z)
- Sup-Norm Convergence of Deep Neural Network Estimator for Nonparametric Regression by Adversarial Training [5.68558935178946]
We show the sup-norm convergence of deep neural network estimators with a novel adversarial training scheme.
A deep neural network estimator achieves the optimal rate in the sup-norm sense by the proposed adversarial training with correction.
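A rough sketch of what adversarial training looks like for regression: replace the loss at each observed input by the loss at an approximate worst-case point in a small ball around it. The radius eps and the single FGSM-style inner step are illustrative simplifications, and the paper's correction procedure is omitted.

```python
# One training step of adversarial training for regression (sketch).
# eps is an illustrative perturbation radius; the inner maximization is
# approximated by a single signed-gradient (FGSM-style) step.
import torch
import torch.nn as nn

def adversarial_step(net: nn.Module, opt: torch.optim.Optimizer,
                     X: torch.Tensor, y: torch.Tensor, eps: float = 0.05) -> float:
    X_req = X.clone().requires_grad_(True)
    loss = nn.functional.mse_loss(net(X_req).squeeze(-1), y)
    grad, = torch.autograd.grad(loss, X_req)
    X_adv = (X + eps * grad.sign()).detach()   # approximate worst case in L-inf ball
    opt.zero_grad()
    adv_loss = nn.functional.mse_loss(net(X_adv).squeeze(-1), y)
    adv_loss.backward()
    opt.step()
    return adv_loss.item()
```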
arXiv Detail & Related papers (2023-07-08T20:24:14Z)
- Penalized deep neural networks estimator with general loss functions under weak dependence [0.0]
This paper develops sparse-penalized deep neural network predictors for learning weakly dependent processes.
Some simulation results are provided, and an application to the forecast of particulate matter in the Vitória metropolitan area is also considered.
arXiv Detail & Related papers (2023-05-10T15:06:53Z)
- Semantic Strengthening of Neuro-Symbolic Learning [85.6195120593625]
Neuro-symbolic approaches typically resort to fuzzy approximations of a probabilistic objective.
We show how to compute this efficiently for tractable circuits.
We test our approach on three tasks: predicting a minimum-cost path in Warcraft, predicting a minimum-cost perfect matching, and solving Sudoku puzzles.
arXiv Detail & Related papers (2023-02-28T00:04:22Z)
- Deep learning for $\psi$-weakly dependent processes [0.0]
We consider deep neural networks for learning $\psi$-weakly dependent processes.
The consistency of the empirical risk minimization algorithm in the class of deep neural network predictors is established.
Some simulation results are provided, as well as an application to the US recession data.
arXiv Detail & Related papers (2023-02-01T09:31:15Z)
- Estimation of Non-Crossing Quantile Regression Process with Deep ReQU Neural Networks [5.5272015676880795]
We propose a penalized nonparametric approach to estimating the quantile regression process (QRP) in a nonseparable model using rectified quadratic unit (ReQU) activated deep neural networks.
We establish the non-asymptotic excess risk bounds for the estimated QRP and derive the mean integrated squared error for the estimated QRP under mild smoothness and regularity conditions.
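For concreteness, a ReQU unit is $x \mapsto \max(x,0)^2$. Below is a hedged sketch of penalized QRP estimation: a network takes $(x,\tau)$ and outputs the $\tau$-th conditional quantile, trained with the pinball loss plus a penalty discouraging quantile crossing on a grid of levels. The penalty weight kappa and the architecture are placeholders, not the paper's construction.

```python
# Sketch: ReQU-activated network for the quantile regression process, with
# pinball loss and a non-crossing penalty (quantiles nondecreasing in tau).
import torch
import torch.nn as nn

class ReQU(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x) ** 2  # rectified quadratic unit

d = 1  # covariate dimension (illustrative)
net = nn.Sequential(nn.Linear(d + 1, 32), ReQU(),
                    nn.Linear(32, 32), ReQU(), nn.Linear(32, 1))

def pinball(u: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    return torch.mean(torch.maximum(tau * u, (tau - 1.0) * u))

def loss_fn(X: torch.Tensor, y: torch.Tensor, kappa: float = 10.0) -> torch.Tensor:
    taus = torch.rand(X.shape[0], 1)                     # random levels in (0, 1)
    q = net(torch.cat([X, taus], dim=1)).squeeze(-1)
    fit = pinball(y - q, taus.squeeze(-1))
    # non-crossing penalty on a fixed grid of quantile levels
    grid = torch.linspace(0.05, 0.95, 10).view(1, -1, 1).expand(X.shape[0], -1, -1)
    Xg = X.unsqueeze(1).expand(-1, 10, -1)
    qg = net(torch.cat([Xg, grid], dim=2)).squeeze(-1)   # (n, 10) quantile curves
    crossing = torch.relu(qg[:, :-1] - qg[:, 1:]).mean()
    return fit + kappa * crossing
```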
arXiv Detail & Related papers (2022-07-21T12:26:45Z)
- Sample Complexity of Nonparametric Off-Policy Evaluation on Low-Dimensional Manifolds using Deep Networks [71.95722100511627]
We consider the off-policy evaluation problem of reinforcement learning using deep neural networks.
We show that, by choosing network size appropriately, one can leverage the low-dimensional manifold structure in the Markov decision process.
arXiv Detail & Related papers (2022-06-06T20:25:20Z)
- The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Online nonparametric regression with Sobolev kernels [99.12817345416846]
We derive the regret upper bounds on the classes of Sobolev spaces $W_p^\beta(\mathcal{X})$, $p \geq 2$, $\beta > \frac{d}{p}$.
The upper bounds are supported by the minimax regret analysis, which reveals that in the cases $\beta > \frac{d}{2}$ or $p = \infty$ these rates are (essentially) optimal.
arXiv Detail & Related papers (2021-02-06T15:05:14Z)