Optimal Sampling Density for Nonparametric Regression
- URL: http://arxiv.org/abs/2105.11990v1
- Date: Tue, 25 May 2021 14:52:17 GMT
- Title: Optimal Sampling Density for Nonparametric Regression
- Authors: Danny Panknin and Shinichi Nakajima and Klaus-Robert Müller
- Abstract summary: We propose a novel active learning strategy for regression, which is model-agnostic, robust against model mismatch, and interpretable.
We adopt the mean integrated squared error (MISE) as a generalization criterion, and use the asymptotic behavior of the MISE as well as the locally optimal bandwidths.
The almost model-free nature of our approach should encode raw properties of the target problem, and thus provide a robust and model-agnostic active learning strategy.
- Score: 5.3219212985943924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel active learning strategy for regression, which is
model-agnostic, robust against model mismatch, and interpretable. Assuming that
a small number of initial samples are available, we derive the optimal training
density that minimizes the generalization error of local polynomial smoothing
(LPS) with its kernel bandwidth tuned locally: We adopt the mean integrated
squared error (MISE) as a generalization criterion, and use the asymptotic
behavior of the MISE as well as the locally optimal bandwidths (LOB) -- the
bandwidth function that minimizes MISE in the asymptotic limit. The asymptotic
expression of our objective then reveals the dependence of the MISE on the
training density, enabling analytic minimization. As a result, we obtain the
optimal training density in closed form. The almost model-free nature of our
approach should encode raw properties of the target problem, and thus provide a
robust and model-agnostic active learning strategy. Furthermore, the obtained
training density factorizes the influence of local function complexity, noise
level, and test density in a transparent and interpretable way. We validate our
theory in numerical simulations, and show that the proposed active learning
method outperforms the existing state-of-the-art model-agnostic approaches.
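For intuition, here is a hedged sketch of the derivation in the classical one-dimensional local-linear case; the paper treats general local polynomial smoothing in arbitrary dimension, so the exponents below are illustrative rather than the paper's general result. The asymptotic MISE under training density $q$ and bandwidth function $h$ is

$$ \mathrm{MISE}(q,h) \approx \int \left[ \tfrac{1}{4}\,h(x)^4\,\mu_2(K)^2\,f''(x)^2 + \frac{\sigma^2(x)\,R(K)}{n\,h(x)\,q(x)} \right] w(x)\,dx, $$

where $f$ is the regression function, $\sigma^2(x)$ the noise level, $w$ the test density, and $\mu_2(K)$, $R(K)$ kernel constants. Pointwise minimization over $h$ gives the locally optimal bandwidth

$$ h^*(x) = \left[ \frac{\sigma^2(x)\,R(K)}{\mu_2(K)^2\,f''(x)^2\,n\,q(x)} \right]^{1/5}, $$

and substituting it back reduces the MISE to a constant times $n^{-4/5}\int \sigma^2(x)^{4/5}\,|f''(x)|^{2/5}\,q(x)^{-4/5}\,w(x)\,dx$. Minimizing this functional over densities (enforcing $\int q = 1$ with a Lagrange multiplier) yields the closed form

$$ q^*(x) \;\propto\; \sigma^2(x)^{4/9}\,|f''(x)|^{2/9}\,w(x)^{5/9}, $$

which factorizes into noise level, local function complexity (curvature, in this simple case), and test density, as the abstract states.

Below is a minimal numerical sketch of how such a density could drive sampling, assuming pilot estimates of $\sigma^2(x)$ and $|f''(x)|$ are available; the function names and grid discretization are hypothetical, not from the paper.

```python
import numpy as np

def optimal_density(x_grid, sigma2, abs_curv, w):
    # Closed-form optimal density for the 1D local-linear case above;
    # the exponents are illustrative of the paper's general LPS result.
    g = sigma2 ** (4 / 9) * abs_curv ** (2 / 9) * w ** (5 / 9)
    dx = x_grid[1] - x_grid[0]
    return g / (g.sum() * dx)  # normalize to a density on the grid

def sample_from_density(x_grid, q, n, rng):
    # Draw n training inputs from the grid-discretized density q.
    p = q / q.sum()
    return rng.choice(x_grid, size=n, p=p)

# Toy usage: heteroscedastic noise, curvature concentrated near 0.
rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 601)
sigma2 = 0.05 + 0.5 * (x > 1.0)    # pilot estimate of the noise level
abs_curv = 4.0 * np.exp(-x ** 2)   # pilot estimate of |f''|
w = np.full_like(x, 1.0 / 6.0)     # uniform test density on [-3, 3]
q_star = optimal_density(x, sigma2, abs_curv, w)
new_inputs = sample_from_density(x, q_star, n=200, rng=rng)
```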
Related papers
- Alternating Minimization Schemes for Computing Rate-Distortion-Perception Functions with $f$-Divergence Perception Constraints [10.564071872770146]
We study the computation of the rate-distortion-perception function (RDPF) for discrete memoryless sources.
We characterize the optimal parametric solutions.
We provide sufficient conditions on the distortion and the perception constraints.
arXiv Detail & Related papers (2024-08-27T12:50:12Z)
- Model-Free Active Exploration in Reinforcement Learning [53.786439742572995]
We study the problem of exploration in Reinforcement Learning and present a novel model-free solution.
Our strategy is able to identify efficient policies faster than state-of-the-art exploration approaches.
arXiv Detail & Related papers (2024-06-30T19:00:49Z)
- Conditional Pseudo-Reversible Normalizing Flow for Surrogate Modeling in Quantifying Uncertainty Propagation [11.874729463016227]
We introduce a conditional pseudo-reversible normalizing flow for constructing surrogate models of a physical model polluted by additive noise.
The training process utilizes a dataset consisting of input-output pairs, without requiring prior knowledge about the noise or the function.
Our model, once trained, can generate samples from any conditional probability density functions whose high probability regions are covered by the training set.
arXiv Detail & Related papers (2024-03-31T00:09:58Z)
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) has been proposed by formulating the objective as the logistic loss of classifying real data against artificial noise (a minimal sketch of this objective appears after this list).
In this paper, we study a direct approach to optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
- Convergence of uncertainty estimates in Ensemble and Bayesian sparse model discovery [4.446017969073817]
We show empirical success in terms of accuracy and robustness to noise with a bootstrapping-based sequential thresholding least-squares estimator.
We show that this bootstrapping-based ensembling technique can perform a provably correct variable selection procedure with an exponential convergence rate of the error rate.
arXiv Detail & Related papers (2023-01-30T04:07:59Z)
- Sampling with Mollified Interaction Energy Descent [57.00583139477843]
We present a new optimization-based method for sampling called mollified interaction energy descent (MIED).
MIED minimizes a new class of energies on probability measures called mollified interaction energies (MIEs).
We show experimentally that for unconstrained sampling problems our algorithm performs on par with existing particle-based algorithms like SVGD.
arXiv Detail & Related papers (2022-10-24T16:54:18Z)
- Structured Optimal Variational Inference for Dynamic Latent Space Models [16.531262817315696]
We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products plus the intercept of the latent positions.
To balance posterior inference and computational scalability, we consider a structured mean-field variational inference framework.
arXiv Detail & Related papers (2022-09-29T22:10:42Z)
- Heterogeneous Tensor Mixture Models in High Dimensions [5.656785831541303]
We consider the problem of jointly modeling and clustering tensors by introducing a flexible high-dimensional tensor mixture model with heterogeneous covariances.
We show that our method converges geometrically to a neighborhood that is within statistical precision of the true parameter.
Our analysis identifies important brain regions for diagnosis in an autism spectrum disorder study.
arXiv Detail & Related papers (2021-04-15T21:06:16Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle (the CNML distribution is sketched after this list).
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Imitation with Neural Density Models [98.34503611309256]
We propose a new framework for Imitation Learning (IL) via density estimation of the expert's occupancy measure followed by Imitation Occupancy Entropy Reinforcement Learning (RL) using the density as a reward.
Our approach maximizes a non-adversarial model-free RL objective that provably lower bounds reverse Kullback-Leibler divergence between occupancy measures of the expert and imitator.
arXiv Detail & Related papers (2020-10-19T19:38:36Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can face the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
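As referenced from the noise-contrastive estimation entry above, here is a minimal sketch of the NCE logistic objective for an unnormalized model, assuming the relevant log-densities have already been evaluated on samples; the function and argument names are hypothetical.

```python
import numpy as np

def nce_loss(log_p_data, log_q_data, log_p_noise, log_q_noise, nu=1.0):
    # Binary-logistic NCE: classify real samples against nu-times-as-many
    # noise samples; the logit is the (unnormalized) model log-density
    # minus the noise log-density, shifted by log(nu).
    logit_data = log_p_data - log_q_data - np.log(nu)
    logit_noise = log_p_noise - log_q_noise - np.log(nu)
    # -log(sigmoid(z)) = log(1 + e^{-z});  -log(1 - sigmoid(z)) = log(1 + e^{z})
    return (np.logaddexp(0.0, -logit_data).mean()
            + nu * np.logaddexp(0.0, logit_noise).mean())
```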
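Likewise, for the ACNML entry above: the conditional normalized maximum likelihood distribution that ACNML amortizes can be written, for a query input $x$ and training set $\mathcal{D}$, as (a standard statement, hedged against the paper's exact notation)

$$ p_{\mathrm{CNML}}(y \mid x, \mathcal{D}) \;=\; \frac{p_{\hat\theta(\mathcal{D} \cup \{(x,y)\})}(y \mid x)}{\sum_{y'} p_{\hat\theta(\mathcal{D} \cup \{(x,y')\})}(y' \mid x)}, $$

where $\hat\theta(\cdot)$ denotes the maximum-likelihood estimate on the augmented dataset; the amortization replaces the per-query optimizations that make the exact form intractable.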