Regularized Nonlinear Regression for Simultaneously Selecting and
Estimating Key Model Parameters
- URL: http://arxiv.org/abs/2104.11426v1
- Date: Fri, 23 Apr 2021 06:17:57 GMT
- Title: Regularized Nonlinear Regression for Simultaneously Selecting and
Estimating Key Model Parameters
- Authors: Kyubaek Yoon, Hojun You, Wei-Ying Wu, Chae Young Lim, Jongeun Choi,
Connor Boss, Ahmed Ramadan, John M. Popovich Jr., Jacek Cholewicki, N. Peter
Reeves, Clark J. Radcliffe
- Abstract summary: In system identification, estimating parameters of a model using limited observations results in poor identifiability.
We propose a new method to simultaneously select and estimate sensitive parameters as key model parameters and fix the remaining parameters to a set of typical values.
- Score: 1.6122433144430324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In system identification, estimating parameters of a model using limited
observations results in poor identifiability. To cope with this issue, we
propose a new method to simultaneously select and estimate sensitive parameters
as key model parameters and fix the remaining parameters to a set of typical
values. Our method is formulated as a nonlinear least squares estimator with
L1-regularization on the deviation of parameters from a set of typical values.
First, we provide consistency and oracle properties of the proposed estimator
as a theoretical foundation. Second, we provide a novel approach based on
Levenberg-Marquardt optimization to numerically find the solution to the
formulated problem. Third, to show the effectiveness, we present an application
identifying a biomechanical parametric model of a head position tracking task
for 10 human subjects from limited data. In a simulation study, the variances
of the estimated parameters are decreased by 96.1% compared with those of the
parameters estimated without L1-regularization. In an experimental study, our
method improves model interpretability by reducing the number of parameters to
be estimated while keeping the variance accounted for (VAF) above 82.5%.
Moreover, the variances of the estimated parameters are reduced by 71.1%
compared with those of the parameters estimated without L1-regularization.
Finally, our method solves the regularized nonlinear regression 54 times
faster than the standard simplex-based optimization.
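To make the formulation concrete, the sketch below fits a toy nonlinear model with an L1 penalty on the deviation of the parameters from their typical values. The exponential model, the typical values, and the weight lam are illustrative assumptions, and the L1 term is smoothed so that an off-the-shelf least-squares solver can be used; this is a stand-in for, not a reproduction of, the paper's Levenberg-Marquardt-based algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical nonlinear model standing in for the paper's biomechanical
# model: y = a * exp(-b * t) + c.
def model(theta, t):
    a, b, c = theta
    return a * np.exp(-b * t) + c

def residuals(theta, t, y, theta_typ, lam, eps=1e-8):
    data_res = y - model(theta, t)
    # Smoothed L1 penalty: pen_res_i**2 ~= lam * |theta_i - theta_typ_i|,
    # so the squared-residual objective approximates L1-regularized NLS.
    pen_res = np.sqrt(lam) * ((theta - theta_typ) ** 2 + eps) ** 0.25
    return np.concatenate([data_res, pen_res])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
theta_true = np.array([2.0, 1.5, 0.3])
y = model(theta_true, t) + 0.05 * rng.standard_normal(t.size)

theta_typ = np.array([2.0, 1.0, 0.0])  # assumed typical values
lam = 0.1                              # assumed regularization weight
fit = least_squares(residuals, x0=theta_typ, args=(t, y, theta_typ, lam))
print("estimate:", fit.x)
```

Parameters whose estimates stay at their typical values are effectively fixed, while the remaining deviations mark the key parameters that the data can actually identify.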
Related papers
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators which converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
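For orientation, a plain one-to-k nearest-neighbor matching estimator of the average treatment effect is sketched below under assumed names and a Euclidean metric; it is only the classical baseline that the paper modifies, and it does not achieve the paper's $\sqrt{n}$-rate.

```python
import numpy as np

def matching_ate(x, y, d, k=1):
    # Classical one-to-k nearest-neighbor matching estimate of the
    # average treatment effect. x: covariates, y: outcomes, d: 0/1.
    x = np.asarray(x, dtype=float).reshape(len(y), -1)
    treated, control = np.flatnonzero(d == 1), np.flatnonzero(d == 0)

    def impute(targets, pool):
        # Average outcomes of each target's k nearest opposite-group
        # neighbors (Euclidean distance on covariates).
        out = np.empty(len(targets))
        for j, i in enumerate(targets):
            dist = np.linalg.norm(x[pool] - x[i], axis=1)
            out[j] = y[pool[np.argsort(dist)[:k]]].mean()
        return out

    y1_hat, y0_hat = np.empty(len(y)), np.empty(len(y))
    y1_hat[treated], y1_hat[control] = y[treated], impute(control, treated)
    y0_hat[control], y0_hat[treated] = y[control], impute(treated, control)
    return float(np.mean(y1_hat - y0_hat))

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
d = (rng.random(500) < 0.5).astype(int)
y = 1.0 + 2.0 * d + x + 0.1 * rng.standard_normal(500)
print(matching_ate(x, y, d, k=3))  # close to the true effect of 2.0
```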
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Scaling Exponents Across Parameterizations and Optimizers [94.54718325264218]
We propose a new perspective on parameterization by investigating a key assumption in prior work.
Our empirical investigation includes tens of thousands of models trained with all combinations of three optimizers and four parameterizations.
We find that the best learning rate scaling prescription would often have been excluded by the assumptions in prior work.
arXiv Detail & Related papers (2024-07-08T12:32:51Z)
- Variational Bayesian surrogate modelling with application to robust design optimisation [0.9626666671366836]
Surrogate models provide a quick-to-evaluate approximation to complex computational models.
We consider Bayesian inference for constructing statistical surrogates with input uncertainties and dimensionality reduction.
We demonstrate the approach on robust structural optimisation problems where cost functions depend on a weighted sum of the mean and standard deviation of model outputs.
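A minimal sketch of such a robust objective, with an assumed cheap stand-in for the surrogate, an assumed weight w, and Monte Carlo sampling of the input uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cheap surrogate of an expensive model, with design
# variable `design` and uncertain input xi.
def surrogate(design, xi):
    return (design - 1.0) ** 2 + 0.5 * design * xi

def robust_cost(design, w=0.5, n_mc=2000):
    # Weighted sum of the mean and standard deviation of surrogate
    # outputs under xi ~ N(0, 1); weights and distribution are assumed.
    out = surrogate(design, rng.standard_normal(n_mc))
    return w * out.mean() + (1.0 - w) * out.std()

designs = np.linspace(-1.0, 3.0, 201)
best = designs[np.argmin([robust_cost(d) for d in designs])]
print("robust optimum near:", best)
```

Relative to the plain minimizer at design = 1, the standard-deviation term pulls the robust optimum toward designs whose output varies less with xi.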
arXiv Detail & Related papers (2024-04-23T09:22:35Z)
- Adaptive debiased machine learning using data-driven model selection techniques [0.5735035463793007]
Adaptive Debiased Machine Learning (ADML) is a nonparametric framework that combines data-driven model selection and debiased machine learning techniques.
ADML avoids the bias introduced by model misspecification and remains free from the restrictions of parametric and semiparametric models.
We provide a broad class of ADML estimators for estimating the average treatment effect in adaptive partially linear regression models.
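As a reference point, the classical debiased (doubly robust) building block for the average treatment effect is the augmented inverse-propensity-weighted estimator sketched below; ADML's data-driven selection of the working model is the paper's contribution and is not shown. All inputs are assumed to be cross-fitted machine-learning estimates.

```python
import numpy as np

def aipw_ate(y, d, mu1, mu0, e):
    # Augmented inverse-propensity-weighted (doubly robust) estimate of
    # the average treatment effect. y: outcomes, d: 0/1 treatment,
    # mu1/mu0: estimated outcome regressions under treatment/control,
    # e: estimated propensity scores, all from cross-fitted ML models.
    return float(np.mean(
        mu1 - mu0
        + d * (y - mu1) / e
        - (1 - d) * (y - mu0) / (1 - e)
    ))
```

The estimator stays consistent if either the outcome regressions or the propensity model is correct, which is what makes it a natural target for adaptive model selection.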
arXiv Detail & Related papers (2023-07-24T06:16:17Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
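A kernel-based shallow network in this sense can be as simple as one layer of Gaussian radial basis functions centered at the training snapshots; the sketch below is an illustrative stand-in (gamma and the jitter term are assumed hyperparameters), and the paper's active-learning criterion for choosing new snapshots is not reproduced.

```python
import numpy as np

def rbf_fit(x_train, y_train, gamma=50.0):
    # One hidden layer of Gaussian RBF units centered at the snapshots;
    # output weights come from a regularized linear solve.
    k = np.exp(-gamma * (x_train[:, None] - x_train[None, :]) ** 2)
    w = np.linalg.solve(k + 1e-8 * np.eye(len(x_train)), y_train)
    return lambda x: np.exp(-gamma * (x[:, None] - x_train[None, :]) ** 2) @ w

x = np.linspace(0.0, 1.0, 8)   # parameter snapshots
y = np.sin(2.0 * np.pi * x)    # high-fidelity solution at the snapshots
surrogate = rbf_fit(x, y)
print(surrogate(np.array([0.25])))  # approximately sin(pi/2) = 1
```

An active-learning loop would repeatedly evaluate an error indicator for this surrogate over candidate parameters and add the snapshot where the indicator is largest.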
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- An iterative multi-fidelity approach for model order reduction of multi-dimensional input parametric PDE systems [0.0]
We propose a parametric sampling strategy for the reduction of large-scale PDE systems with multidimensional input parametric spaces.
It is achieved by exploiting low-fidelity models throughout the parametric space to sample points using an efficient sampling strategy.
Since the proposed methodology leverages the use of low-fidelity models to assimilate the solution database, it significantly reduces the computational cost in the offline stage.
arXiv Detail & Related papers (2023-01-23T15:25:58Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
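A toy version of this idea is sketched below; how the two least-squares fits are actually combined in the paper is more careful, so treat this two-stage construction, and all of its constants, as assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n, p, a_true = 400, 25, 2, 0.95
theta = np.zeros((T, p))
theta[0] = [1.0, -0.5]
for t in range(1, T):
    # Unknown parameters evolve under stable linear dynamics plus noise.
    theta[t] = a_true * theta[t - 1] + 0.05 * rng.standard_normal(p)

theta_hat = np.empty((T, p))
for t in range(T):
    # OLS fit #1: regression coefficients at each time step.
    x = rng.standard_normal((n, p))
    y = x @ theta[t] + 0.1 * rng.standard_normal(n)
    theta_hat[t] = np.linalg.lstsq(x, y, rcond=None)[0]

# OLS fit #2: regress successive coefficient estimates on each other
# to recover the dynamics.
z0, z1 = theta_hat[:-1].ravel(), theta_hat[1:].ravel()
print("recovered dynamics coefficient:", z0 @ z1 / (z0 @ z0))  # ~ a_true
```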
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- Support estimation in high-dimensional heteroscedastic mean regression [2.28438857884398]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
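One standard smooth, strictly convex variant is the pseudo-Huber loss below; whether this is the paper's exact variant, and how its tuning parameter depends on the problem, is not asserted here.

```python
import numpy as np

def pseudo_huber(r, delta=1.0):
    # Quadratic near zero, asymptotically linear in |r|: smooth and
    # strictly convex, yet robust to heavy-tailed errors.
    return delta ** 2 * (np.sqrt(1.0 + (r / delta) ** 2) - 1.0)

r = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(pseudo_huber(r))  # ~|r| for large |r|, ~r**2 / 2 near zero
```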
arXiv Detail & Related papers (2020-11-03T09:46:31Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
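The generic randomized-truncation ("Russian roulette") trick behind such estimators is easy to state: truncate an infinite series at a random level and reweight each term by its survival probability. The sketch below shows the generic trick on a scalar series with an assumed geometric truncation rule; SUMO's specific construction for log marginal likelihoods is in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def russian_roulette(term, p_stop=0.4):
    # Unbiased single-sample estimate of sum_{k>=0} term(k): after each
    # term, stop with probability p_stop; dividing term(k) by the
    # survival probability P(K >= k) keeps the expectation exact.
    total, k, survive = 0.0, 0, 1.0
    while True:
        total += term(k) / survive
        if rng.random() < p_stop:
            return total
        survive *= 1.0 - p_stop
        k += 1

# Example: sum_{k>=0} 0.5**k = 2; the average of many draws recovers it.
draws = [russian_roulette(lambda k: 0.5 ** k) for _ in range(20000)]
print(np.mean(draws))  # ~ 2.0
```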
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
- Orthogonal Statistical Learning [49.55515683387805]
We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk depends on an unknown nuisance parameter.
We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order.
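The condition is usually written as a first-order insensitivity of the population risk to the nuisance (the notation below is a common formulation, assumed rather than quoted from the paper):

```latex
% Neyman orthogonality: the Gateaux derivative of the population risk
% L(theta, g) in the nuisance direction vanishes at the truth.
\frac{\partial}{\partial t}\,
  L\bigl(\theta_0,\; g_0 + t\,(g - g_0)\bigr)\Big|_{t=0} = 0
  \qquad \text{for all candidate nuisances } g.
```

First-order errors in the estimated nuisance then cancel, so only their second-order effect reaches the excess risk bound.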
arXiv Detail & Related papers (2019-01-25T02:21:24Z)