Twin Neural Network Regression is a Semi-Supervised Regression Algorithm
- URL: http://arxiv.org/abs/2106.06124v1
- Date: Fri, 11 Jun 2021 02:10:52 GMT
- Title: Twin Neural Network Regression is a Semi-Supervised Regression Algorithm
- Authors: Sebastian J. Wetzel, Roger G. Melko, Isaac Tamblyn
- Abstract summary: Twin neural network regression (TNNR) is a semi-supervised regression algorithm.
TNNR is trained to predict differences between the target values of two different data points rather than the targets themselves.
- Score: 0.90238471756546
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Twin neural network regression (TNNR) is a semi-supervised regression
algorithm: it can be trained on unlabelled data points as long as other,
labelled anchor data points are present. TNNR is trained to predict
differences between the target values of two different data points rather than
the targets themselves. By ensembling predicted differences between the targets
of an unseen data point and all training data points, it is possible to obtain
a very accurate prediction for the original regression problem. Since any loop
of predicted differences should sum to zero, loops can be supplied to the
training data, even if the data points themselves within loops are unlabelled.
Semi-supervised training significantly improves the performance of TNNR, which
is already state of the art.
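The two ideas in the abstract, ensembling predicted differences over labelled anchors and the sum-to-zero loop constraint, can be sketched in a few lines. This is a hedged toy illustration, not the authors' implementation: an exact difference model on a known linear target stands in for a trained twin network, and all names are illustrative.

```python
import numpy as np

# Sketch of TNNR inference: a trained twin network f(x1, x2) approximates
# the target difference y1 - y2. Here an exact difference model on a toy
# linear target stands in for the trained network, so the ensembling step
# can be shown end to end.

def tnnr_predict(f, x_new, anchors_x, anchors_y):
    """Ensemble over anchors: average f(x_new, x_i) + y_i over every
    labelled anchor (x_i, y_i) to recover a point prediction."""
    estimates = [f(x_new, x_i) + y_i for x_i, y_i in zip(anchors_x, anchors_y)]
    return float(np.mean(estimates))

true_fn = lambda x: 3.0 * x + 1.0             # hidden target function
f = lambda x1, x2: true_fn(x1) - true_fn(x2)  # stand-in for a trained twin net

anchors_x = np.array([0.0, 1.0, 2.0])         # labelled anchor points
anchors_y = true_fn(anchors_x)

print(tnnr_predict(f, 1.5, anchors_x, anchors_y))  # -> 5.5

# Loop consistency: predicted differences around any closed loop should sum
# to zero, e.g. f(a, b) + f(b, c) + f(c, a) = 0; enforcing this on loops
# through unlabelled points is what enables semi-supervised training.
print(f(0.0, 1.0) + f(1.0, 2.0) + f(2.0, 0.0))  # -> 0.0
```

With a perfect difference model every anchor yields the same estimate; with a real twin network the anchor estimates differ, and averaging them is what makes the ensemble accurate.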
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find the solutions reachable via our training procedure, including the gradient-based optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Twin Neural Network Improved k-Nearest Neighbor Regression [0.0]
Twin neural network regression is trained to predict differences between regression targets rather than the targets themselves.
A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points.
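The k-nearest-neighbor variant described above can be sketched by restricting the anchor ensemble to the k anchors closest to the query. As before, this is a hedged toy sketch with illustrative names; an exact difference model stands in for a trained twin network.

```python
import numpy as np

# Sketch of kNN-restricted twin-network regression: instead of ensembling
# over every anchor, average the predicted differences only over the k
# anchors nearest to the query point in input space.

def knn_tnn_predict(f, x_new, anchors_x, anchors_y, k=3):
    # indices of the k anchors closest to x_new
    nearest = np.argsort(np.abs(anchors_x - x_new))[:k]
    estimates = [f(x_new, anchors_x[i]) + anchors_y[i] for i in nearest]
    return float(np.mean(estimates))

true_fn = lambda x: x ** 2
f = lambda x1, x2: true_fn(x1) - true_fn(x2)  # stand-in twin network

anchors_x = np.linspace(0.0, 4.0, 9)
anchors_y = true_fn(anchors_x)

print(knn_tnn_predict(f, 2.2, anchors_x, anchors_y, k=3))  # close to 4.84
```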
arXiv Detail & Related papers (2023-10-01T13:20:49Z)
- A step towards understanding why classification helps regression [16.741816961905947]
We show that the effect of adding a classification loss is the most pronounced for regression with imbalanced data.
For a regression task with imbalanced data sampling, adding a classification loss helps.
arXiv Detail & Related papers (2023-08-21T10:00:46Z)
- Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking [66.83273589348758]
Link prediction attempts to predict whether an unseen edge exists based on only a portion of edges of a graph.
A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task.
New and diverse datasets have also been created to better evaluate the effectiveness of these new models.
arXiv Detail & Related papers (2023-06-18T01:58:59Z)
- How to get the most out of Twinned Regression Methods [0.0]
Twinned regression methods are designed to solve the dual problem to the original regression problem.
A solution to the original regression problem can be obtained by ensembling predicted differences between the targets of an unknown data point and multiple known anchor data points.
arXiv Detail & Related papers (2023-01-03T22:37:44Z)
- Boosted Dynamic Neural Networks [53.559833501288146]
A typical EDNN has multiple prediction heads at different layers of the network backbone.
To optimize the model, these prediction heads together with the network backbone are trained on every batch of training data.
Treating inputs differently at training and testing time causes a mismatch between the training and testing data distributions.
We formulate an EDNN as an additive model inspired by gradient boosting, and propose multiple training techniques to optimize the model effectively.
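The additive formulation above can be illustrated with a toy: if each exit head contributes a residual correction in the spirit of gradient boosting, the prediction at exit k is the sum of the first k head outputs. This is a hedged sketch with illustrative names, not the paper's model.

```python
import numpy as np

# Toy additive view of an early-exit dynamic network: each head outputs a
# residual correction, so the prediction available at exit k is the
# cumulative sum of the first k head outputs.

def exit_predictions(head_outputs):
    """Entry k of the result is the prediction available at exit k."""
    return np.cumsum(head_outputs, axis=0)

# per-head residual contributions toward a target of 1.0
heads = np.array([0.6, 0.25, 0.1])
print(exit_predictions(heads))  # later exits get closer to the target
```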
arXiv Detail & Related papers (2022-11-30T04:23:12Z)
- Variation-Incentive Loss Re-weighting for Regression Analysis on Biased Data [8.115323786541078]
We aim to improve the accuracy of the regression analysis by addressing the data skewness/bias during model training.
We propose a Variation-Incentive Loss re-weighting method (VILoss) to optimize the gradient descent-based model training for regression analysis.
arXiv Detail & Related papers (2021-09-14T10:22:21Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Twin Neural Network Regression [0.802904964931021]
We introduce twin neural network (TNN) regression.
This method predicts differences between the target values of two different data points rather than the targets themselves.
We show that TNNs are competitive with, or more accurate than, other state-of-the-art methods across a range of data sets.
arXiv Detail & Related papers (2020-12-29T17:52:31Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
The CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, suggesting they are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) from a pre-trained model helps transfer knowledge learned on larger datasets to the target task.
We propose RIFLE - a strategy that deepens backpropagation in transfer learning settings.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.