Active Nearest Neighbor Regression Through Delaunay Refinement
- URL: http://arxiv.org/abs/2206.08061v1
- Date: Thu, 16 Jun 2022 10:24:03 GMT
- Title: Active Nearest Neighbor Regression Through Delaunay Refinement
- Authors: Alexander Kravberg, Giovanni Luca Marchetti, Vladislav Polianskii,
Anastasiia Varava, Florian T. Pokorny, Danica Kragic
- Abstract summary: We introduce an algorithm for active function approximation based on nearest neighbor regression.
Our Active Nearest Neighbor Regressor (ANNR) relies on the Voronoi-Delaunay framework from computational geometry to subdivide the space into cells with constant estimated function value.
- Score: 79.93030583257597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce an algorithm for active function approximation based on nearest
neighbor regression. Our Active Nearest Neighbor Regressor (ANNR) relies on the
Voronoi-Delaunay framework from computational geometry to subdivide the space
into cells with constant estimated function value and select novel query points
in a way that takes the geometry of the function graph into account. We
consider the recent state-of-the-art active function approximator called DEFER,
which is based on incremental rectangular partitioning of the space, as the
main baseline. The ANNR addresses a number of limitations that arise from the
space subdivision strategy used in DEFER. We provide a computationally
efficient implementation of our method, as well as theoretical halting
guarantees. Empirical results show that ANNR outperforms the baseline for both
closed-form functions and real-world examples, such as gravitational wave
parameter inference and exploration of the latent space of a generative model.
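The abstract names the ingredients (nearest-neighbor cells, the Voronoi-Delaunay framework, graph-aware query selection) but not the exact query rule. Below is a minimal sketch of an ANNR-style loop under assumptions not taken from the paper: simplices are ranked by the volume of their lift onto the function graph, new queries go to the barycenter of the top-ranked simplex, and the triangulation is rebuilt from scratch each round (the paper describes an efficient incremental implementation). The helper names are illustrative.
```python
# Illustrative ANNR-style active sampling loop (a sketch, not the authors' code).
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

def lifted_simplex_volume(pts, vals, simplex, scale=1.0):
    """Volume of a simplex after lifting its vertices to (x, scale * f(x))."""
    lifted = np.column_stack([pts[simplex], scale * vals[simplex]])
    edges = lifted[1:] - lifted[0]                    # d edge vectors in R^(d+1)
    gram = edges @ edges.T
    return np.sqrt(max(np.linalg.det(gram), 0.0)) / factorial(len(simplex) - 1)

def annr_style_sampling(f, bounds, n_init=8, n_queries=40, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    X = rng.uniform(lo, hi, size=(n_init, lo.size))   # initial random queries
    y = np.array([f(x) for x in X])
    for _ in range(n_queries):
        tri = Delaunay(X)                             # rebuilt each round for clarity
        vols = [lifted_simplex_volume(X, y, s) for s in tri.simplices]
        worst = tri.simplices[int(np.argmax(vols))]   # largest lifted simplex
        x_new = X[worst].mean(axis=0)                 # query its barycenter
        X = np.vstack([X, x_new])
        y = np.append(y, f(x_new))
    return X, y

# Usage: actively sample a 2-D test function on the unit square.
X, y = annr_style_sampling(lambda x: np.sin(5 * x[0]) * x[1], ([0, 0], [1, 1]))
print(X.shape, y.shape)  # (48, 2) (48,)
```
The lifted-volume score grows with both cell size and local variation of the function, which loosely mirrors the abstract's aim of taking the geometry of the function graph into account; the paper's actual query rule and halting analysis are in the text.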
Related papers
- Mapping-to-Parameter Nonlinear Functional Regression with Novel B-spline Free Knot Placement Algorithm [12.491024918270824]
We propose a novel approach to nonlinear functional regression.
The model is based on the mapping of function data from an infinite-dimensional function space to a finite-dimensional parameter space.
The performance of our knot placement algorithms is shown to be robust in both single-function approximation and multiple-function approximation contexts.
arXiv Detail & Related papers (2024-01-26T16:35:48Z)
- Projecting basis functions with tensor networks for Gaussian process regression [5.482420806459269]
We develop an approach that allows us to use an exponential number of basis functions without the corresponding exponential computational complexity.
We project the resulting weights back to the original space to make GP predictions.
In an experiment with an 18-dimensional benchmark data set, we show the applicability of our method to an inverse dynamics problem.
arXiv Detail & Related papers (2023-10-31T16:59:07Z)
- Kernel-based off-policy estimation without overlap: Instance optimality beyond semiparametric efficiency [53.90687548731265]
We study optimal procedures for estimating a linear functional based on observational data.
For any convex and symmetric function class $\mathcal{F}$, we derive a non-asymptotic local minimax bound on the mean-squared error.
arXiv Detail & Related papers (2023-01-16T02:57:37Z)
- Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression [1.7227952883644062]
This paper studies the convergence performance of divide-and-conquer estimators in the setting where the target function does not reside in the underlying kernel space.
As a decomposition-based scalable approach, divide-and-conquer estimators for functional linear regression can substantially reduce time and memory complexity.
arXiv Detail & Related papers (2022-11-20T12:29:06Z)
- Structured Optimal Variational Inference for Dynamic Latent Space Models [16.531262817315696]
We consider a latent space model for dynamic networks, where our objective is to estimate the pairwise inner products plus the intercept of the latent positions.
To balance posterior inference and computational scalability, we consider a structured mean-field variational inference framework.
arXiv Detail & Related papers (2022-09-29T22:10:42Z)
- Adaptive LASSO estimation for functional hidden dynamic geostatistical model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selection operator (GMSOLAS) penalty function, wherein the weights are obtained from the unpenalised f-HD maximum-likelihood estimators (a generic sketch of the adaptive weighting follows this list).
arXiv Detail & Related papers (2022-08-10T19:17:45Z)
- Space Partitioning and Regression Mode Seeking via a Mean-Shift-Inspired Algorithm [5.990174495635326]
The mean shift (MS) algorithm is a nonparametric method for clustering sample points and finding the local modes of kernel density estimates.
We develop an algorithm to estimate the modes of regression functions and partition the sample points in the input space (a sketch of the classical MS update follows this list).
arXiv Detail & Related papers (2021-04-20T16:35:17Z)
- Canny-VO: Visual Odometry with RGB-D Cameras based on Geometric 3D-2D Edge Alignment [85.32080531133799]
This paper reviews the classical problem of free-form curve registration and applies it to an efficient RGB-D visual odometry system called Canny-VO.
Two replacements for the distance transformation commonly used in edge registration are proposed: Approximate Nearest Neighbour Fields and Oriented Nearest Neighbour Fields.
3D-2D edge alignment benefits from these alternative formulations in terms of both efficiency and accuracy.
arXiv Detail & Related papers (2020-12-15T11:42:17Z)
- Reinforcement Learning with General Value Function Approximation: Provably Efficient Approach via Bounded Eluder Dimension [124.7752517531109]
We establish a provably efficient reinforcement learning algorithm with general value function approximation.
We show that our algorithm achieves a regret bound of $\widetilde{O}(\mathrm{poly}(dH)\sqrt{T})$ where $d$ is a complexity measure.
Our theory generalizes recent progress on RL with linear value function approximation and does not make explicit assumptions on the model of the environment.
arXiv Detail & Related papers (2020-05-21T17:36:09Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
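As a companion to the SLEIPNIR entry above: quadrature Fourier features replace the random frequencies of random Fourier features with deterministic Gauss-Hermite nodes. A minimal 1-D sketch for a plain RBF kernel follows; the paper's derivative handling and scaling machinery are not reproduced, and `n_nodes` and `lengthscale` are illustrative choices.
```python
# 1-D sketch of quadrature Fourier features for an RBF kernel (illustrative).
import numpy as np

def quadrature_fourier_features(x, n_nodes=16, lengthscale=1.0):
    """Deterministic Fourier features whose inner product approximates an RBF kernel."""
    t, w = np.polynomial.hermite.hermgauss(n_nodes)  # Gauss-Hermite nodes/weights
    omega = np.sqrt(2.0) * t / lengthscale           # quadrature frequencies
    alpha = w / np.sqrt(np.pi)                       # normalized quadrature weights
    z = np.outer(np.asarray(x, float), omega)
    return np.hstack([np.sqrt(alpha) * np.cos(z), np.sqrt(alpha) * np.sin(z)])

# The feature inner product matches the exact RBF kernel up to quadrature error.
x = np.linspace(-1.0, 1.0, 5)
Phi = quadrature_fourier_features(x)
K_exact = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # lengthscale 1.0
print(np.max(np.abs(Phi @ Phi.T - K_exact)))             # small deterministic error
```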
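For the adaptive-LASSO entry above, the generic weighting trick fits in a few lines: fit an unpenalized estimator, turn its coefficient magnitudes into penalty weights, and solve the weighted LASSO by column rescaling. This is a hedged, plain linear-model sketch; the paper's PMLE for f-HD models and the GMSOLAS penalty are not reproduced.
```python
# Generic adaptive-LASSO sketch for a linear model (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def adaptive_lasso(X, y, alpha=0.1, gamma=1.0):
    """Solve a weighted LASSO with weights 1/|unpenalized estimate|^gamma."""
    beta_init = LinearRegression().fit(X, y).coef_    # unpenalized estimator
    w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)     # adaptive penalty weights
    lasso = Lasso(alpha=alpha).fit(X / w, y)          # column-rescaling trick
    return lasso.coef_ / w                            # map back to original scale

# Usage: only the two truly nonzero coefficients should survive shrinkage.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ np.array([2.0, -1.5] + [0.0] * 8) + 0.1 * rng.normal(size=200)
print(np.round(adaptive_lasso(X, y), 2))
```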
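The mean-shift entry above builds on the classical KDE mode-seeking update; the paper's regression-mode variant is not reproduced here. A minimal sketch of the standard iteration with a Gaussian kernel:
```python
# Classical mean shift update (standard KDE mode seeking, Gaussian kernel).
import numpy as np

def mean_shift(points, x0, bandwidth=0.5, n_iter=100, tol=1e-6):
    """Iterate x <- kernel-weighted mean of the sample until it stops moving."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        w = np.exp(-0.5 * np.sum((points - x) ** 2, axis=1) / bandwidth**2)
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break                                     # converged to a local mode
        x = x_new
    return x

# Usage: two Gaussian blobs; a start near the origin converges to that blob's mode.
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.3, (100, 2)), rng.normal(3.0, 0.3, (100, 2))])
print(mean_shift(pts, x0=[0.5, 0.5]))  # approximately [0, 0]
```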