Twin Neural Network Improved k-Nearest Neighbor Regression
- URL: http://arxiv.org/abs/2310.00664v1
- Date: Sun, 1 Oct 2023 13:20:49 GMT
- Title: Twin Neural Network Improved k-Nearest Neighbor Regression
- Authors: Sebastian J. Wetzel
- Abstract summary: Twin neural network regression is trained to predict differences between regression targets rather than the targets themselves.
A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Twin neural network regression is trained to predict differences between
regression targets rather than the targets themselves. A solution to the
original regression problem can be obtained by ensembling predicted differences
between the targets of an unknown data point and multiple known anchor data
points. Choosing the anchors to be the nearest neighbors of the unknown data
point leads to a neural network-based improvement of k-nearest neighbor
regression. This algorithm is shown to outperform both neural networks and
k-nearest neighbor regression on small to medium-sized data sets.
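To make the algorithm concrete, here is a minimal sketch of the inference step. It assumes a trained twin network `model` that takes two inputs and predicts their target difference, F(x1, x2) ≈ y1 - y2 (a Keras-style two-input `predict` is assumed); the helper name, the value of k, and the plain averaging are illustrative choices, not the paper's exact implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def tnn_knn_predict(model, X_train, y_train, x_query, k=8):
    """Estimate y(x_query) by ensembling predicted target differences
    against the k nearest training points, which serve as anchors."""
    # Choose the anchors: the k nearest neighbors of the query point.
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    anchor_X, anchor_y = X_train[idx[0]], y_train[idx[0]]

    # The twin network predicts differences, F(x1, x2) ~ y1 - y2, so each
    # anchor yields one estimate: y_query ~ F(x_query, anchor) + y_anchor.
    queries = np.repeat(x_query.reshape(1, -1), k, axis=0)
    diffs = np.asarray(model.predict([queries, anchor_X])).ravel()
    estimates = diffs + anchor_y

    # Ensembling: average the k per-anchor estimates.
    return estimates.mean()
```

Twinned regression papers typically also ensemble both orientations of each pair, e.g. averaging F(x, a) and -F(a, x); that refinement is omitted here for brevity.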
Related papers
- SIRST-5K: Exploring Massive Negatives Synthesis with Self-supervised
Learning for Robust Infrared Small Target Detection [53.19618419772467]
Single-frame infrared small target (SIRST) detection aims to recognize small targets from clutter backgrounds.
With the development of Transformers, the scale of SIRST models has been constantly increasing.
With a rich diversity of infrared small target data, our algorithm significantly improves the model performance and convergence speed.
arXiv Detail & Related papers (2024-03-08T16:14:54Z)
- Implicit Bias of Gradient Descent for Two-layer ReLU and Leaky ReLU
Networks on Nearly-orthogonal Data [66.1211659120882]
The implicit bias towards solutions with favorable properties is believed to be a key reason why neural networks trained by gradient-based optimization can generalize well.
While the implicit bias of gradient flow has been widely studied for homogeneous neural networks (including ReLU and leaky ReLU networks), the implicit bias of gradient descent is currently only understood for smooth neural networks.
arXiv Detail & Related papers (2023-10-29T08:47:48Z)
- How to get the most out of Twinned Regression Methods [0.0]
Twinned regression methods are designed to solve the dual problem to the original regression problem.
A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points.
arXiv Detail & Related papers (2023-01-03T22:37:44Z)
- Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth 2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, adversarial labels.
These results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the "NTK regime".
arXiv Detail & Related papers (2022-12-05T14:47:52Z)
- Implicit Bias in Leaky ReLU Networks Trained on High-Dimensional Data [63.34506218832164]
In this work, we investigate the implicit bias of gradient flow and gradient descent in two-layer fully-connected neural networks with leaky ReLU activations.
For gradient flow, we leverage recent work on the implicit bias for homogeneous neural networks to show that, asymptotically, gradient flow produces a neural network with rank at most two.
For gradient descent, provided the random initialization variance is small enough, we show that a single step of gradient descent suffices to drastically reduce the rank of the network, and that the rank remains small throughout training.
arXiv Detail & Related papers (2022-10-13T15:09:54Z)
- Twin Neural Network Regression is a Semi-Supervised Regression Algorithm [0.90238471756546]
Twin neural network regression (TNNR) is a semi-supervised regression algorithm.
TNNR is trained to predict differences between the target values of two different data points rather than the targets themselves.
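As a rough sketch of this training setup, pairs of labeled points can be relabeled with their target differences before fitting; the uniform random pairing below is an assumption made for illustration, not necessarily the paper's sampling scheme.

```python
import numpy as np

def make_difference_pairs(X, y, n_pairs=10000, seed=0):
    """Turn a labeled set (X, y) into twin-regression training data:
    input pairs (x_i, x_j) with scalar targets y_i - y_j."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    # The twin network is then fit on ((X[i], X[j]) -> y[i] - y[j]).
    return (X[i], X[j]), y[i] - y[j]
```

Since n labeled points generate on the order of n^2 ordered pairs, the pairing step acts as implicit data augmentation, which is consistent with the gains these methods report on small data sets.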
arXiv Detail & Related papers (2021-06-11T02:10:52Z)
- Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z)
- Finding hidden-feature depending laws inside a data set and classifying
it using Neural Network [0.0]
The logcosh loss function for neural networks was developed to combine the advantage of the absolute error loss, which does not overweight outliers, with the advantage of the mean squared error, whose derivative is continuous near the mean.
This work suggests a method that uses artificial neural networks with logcosh loss to find the branches of set-valued mappings in parameter-outcome sample sets and to classify the samples according to those branches.
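For concreteness, a minimal NumPy sketch of the logcosh loss as described here; the overflow-safe rewrite in the comments is a standard identity, and the function is an illustration rather than the paper's code.

```python
import numpy as np

def logcosh_loss(y_pred, y_true):
    """Mean log(cosh(e)) over the errors e = y_pred - y_true.
    For small e it behaves like e**2 / 2 (smooth near the mean, like MSE);
    for large |e| it behaves like |e| - log(2) (outlier-robust, like MAE)."""
    e = np.asarray(y_pred) - np.asarray(y_true)
    # log(cosh(e)) = log((exp(e) + exp(-e)) / 2) = logaddexp(e, -e) - log(2),
    # computed with np.logaddexp to avoid overflow for large |e|.
    return np.mean(np.logaddexp(e, -e) - np.log(2))
```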
arXiv Detail & Related papers (2021-01-25T21:37:37Z)
- Twin Neural Network Regression [0.802904964931021]
We introduce twin neural network (TNN) regression.
This method predicts differences between the target values of two different data points rather than the targets themselves.
We show that TNNs are able to compete with, or yield more accurate predictions than, other state-of-the-art methods on a variety of data sets.
arXiv Detail & Related papers (2020-12-29T17:52:31Z) - Measurement error models: from nonparametric methods to deep neural
networks [3.1798318618973362]
We propose an efficient neural network design for estimating measurement error models.
We use a fully connected feed-forward neural network to approximate the regression function $f(x)$.
We conduct an extensive numerical study to compare the neural network approach with classical nonparametric methods.
arXiv Detail & Related papers (2020-07-15T06:05:37Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with the nonconvexity of the underlying optimization problem makes learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random initialization.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.