A semi-supervised learning using over-parameterized regression
- URL: http://arxiv.org/abs/2409.04001v2
- Date: Tue, 19 Nov 2024 07:44:51 GMT
- Title: A semi-supervised learning using over-parameterized regression
- Authors: Katsuyuki Hagiwara
- Abstract summary: Semi-supervised learning (SSL) is an important theme in machine learning.
In this paper, we consider a method of incorporating information on unlabeled samples into kernel functions.
- Score: 0.0
- Abstract: Semi-supervised learning (SSL) is an important theme in machine learning, in which we have a few labeled samples and many unlabeled samples. In this paper, for SSL in a regression problem, we consider a method that incorporates information on unlabeled samples into the kernel functions. As a typical implementation, we employ Gaussian kernels whose centers are the labeled and unlabeled input samples. Since the number of coefficients is then larger than the number of labeled samples, this is an over-parameterized regression problem, for which ridge regression is a typical estimation method. In this paper, we instead apply the minimum norm least squares (MNLS), which is known as a helpful tool for understanding deep learning behavior although it may not be application oriented. In applying MNLS to SSL, we establish several methods based on feature extraction/dimension reduction in the SVD (singular value decomposition) representation of a Gram-type matrix that appears in the over-parameterized regression problem: thresholding according to singular value magnitude with cross validation, hard-thresholding with cross validation, universal thresholding, and bridge thresholding. The first is equivalent to a method using the well-known low-rank approximation of a Gram-type matrix. We refer to these as SVD regression methods. In experiments on real data, depending on the dataset, clear superiority of the proposed SVD regression methods over ridge regression methods was observed, and, again depending on the dataset, incorporating information on unlabeled input samples into the kernels was clearly effective.
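To make the construction above concrete, the following is a minimal sketch (not the authors' code) of over-parameterized kernel regression with Gaussian kernels centered at both labeled and unlabeled inputs, fitted by minimum norm least squares through the SVD, with optional hard-thresholding of small singular values. The function names, bandwidth, and rank-selection rule are illustrative assumptions; in the paper the threshold/rank would be chosen by cross validation or by universal/bridge thresholding rules.

```python
# Minimal sketch, assuming Gaussian kernels centered at labeled + unlabeled inputs
# and an MNLS fit via the SVD with hard-thresholding of small singular values.
import numpy as np

def gaussian_kernel(X, centers, bandwidth=1.0):
    """Gram-type matrix K[i, j] = exp(-||x_i - c_j||^2 / (2 * bandwidth^2))."""
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def svd_regression_fit(X_lab, y_lab, X_unlab, bandwidth=1.0, rank=None):
    """MNLS fit with optional truncation of small singular values.

    Kernel centers are the labeled and unlabeled inputs, so the number of
    coefficients exceeds the number of labeled samples (over-parameterized).
    `rank=None` keeps all numerically nonzero singular values (plain MNLS);
    otherwise only the leading `rank` components are kept (hard-thresholding).
    """
    centers = np.vstack([X_lab, X_unlab])
    K = gaussian_kernel(X_lab, centers, bandwidth)   # shape (n_lab, n_lab + n_unlab)
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    if rank is None:
        rank = int((s > 1e-12 * s[0]).sum())         # drop numerically zero singular values
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank, :]
    coef = Vt.T @ ((U.T @ y_lab) / s)                # minimum norm least squares solution
    return centers, coef

def svd_regression_predict(X_new, centers, coef, bandwidth=1.0):
    return gaussian_kernel(X_new, centers, bandwidth) @ coef
```

A usage sketch: `centers, coef = svd_regression_fit(X_lab, y_lab, X_unlab, rank=20)` followed by `svd_regression_predict(X_test, centers, coef)`; setting `rank=None` recovers the plain MNLS solution.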
Related papers
- Pseudo-Labeling for Kernel Ridge Regression under Covariate Shift [1.3597551064547502]
We learn a regression function with small mean squared error over a target distribution, based on unlabeled data from the target distribution and labeled data that may have a different feature distribution.
We propose to split the labeled data into two subsets, and conduct kernel ridge regression on them separately to obtain a collection of candidate models and an imputation model.
Our estimator achieves the minimax optimal error rate up to a polylogarithmic factor, and we find that using pseudo-labels for model selection does not significantly hinder performance.
arXiv Detail & Related papers (2023-02-20T18:46:12Z) - Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard samples mining.
Our method significantly outperforms the state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z) - Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and just making a single pass over the data.
arXiv Detail & Related papers (2022-07-16T03:09:30Z) - Deep Metric Learning-Based Semi-Supervised Regression With Alternate Learning [0.0]
This paper introduces a novel deep metric learning-based semi-supervised regression (DML-S2R) method for parameter estimation problems.
The proposed DML-S2R method aims to mitigate the problem of an insufficient number of labeled samples without collecting any additional samples with target values.
The experimental results confirm the success of DML-S2R compared to the state-of-the-art semi-supervised regression methods.
arXiv Detail & Related papers (2022-02-23T10:04:15Z) - Memory-Efficient Backpropagation through Large Linear Layers [107.20037639738433]
In modern neural networks like Transformers, linear layers require significant memory to store activations during the backward pass.
This study proposes a memory reduction approach to perform backpropagation through linear layers.
arXiv Detail & Related papers (2022-01-31T13:02:41Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z) - Nonlinear Distribution Regression for Remote Sensing Applications [6.664736150040092]
In many remote sensing applications one wants to estimate variables or parameters of interest from observations.
Standard algorithms such as neural networks, random forests or Gaussian processes are readily available to relate the two.
This paper introduces a nonlinear (kernel-based) method for distribution regression that solves the previous problems without making any assumption on the statistics of the grouped data.
arXiv Detail & Related papers (2020-12-07T22:04:43Z) - Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information theoretic minimax lower bounds for this problem in terms of $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay, a popular reinforcement learning technique, that achieves a significantly better error rate.
arXiv Detail & Related papers (2020-06-16T04:26:50Z) - Choosing the Sample with Lowest Loss makes SGD Robust [19.08973384659313]
We propose a simple variant of the stochastic gradient descent (SGD) method: in each step, the sample with the smallest current loss is chosen for the update (see the minimal sketch after this list).
Relative to vanilla SGD, this represents a new algorithm that effectively minimizes a robust loss formed from the lowest-loss samples.
Our theoretical analysis of this idea for ML problems is backed up with small-scale neural network experiments.
arXiv Detail & Related papers (2020-01-10T05:39:17Z)
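As a companion illustration for the last entry above ("Choosing the Sample with Lowest Loss makes SGD Robust"), here is a minimal sketch of one assumed reading of the mechanism, not the authors' code: at each step a handful of candidate samples is drawn, their current losses are evaluated, and a standard SGD update is applied using only the candidate with the smallest loss. The linear model, squared loss, and hyperparameter values are assumptions for illustration.

```python
# Minimal sketch: SGD that updates only on the lowest-loss candidate per step,
# illustrated for linear regression with squared loss (assumed for illustration).
import numpy as np

def min_loss_sgd(X, y, k=8, lr=0.01, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(X), size=k, replace=False)   # draw k candidate samples
        losses = (X[idx] @ w - y[idx]) ** 2               # current per-sample squared losses
        j = idx[np.argmin(losses)]                        # keep only the lowest-loss candidate
        w -= lr * (X[j] @ w - y[j]) * X[j]                # standard SGD update on that sample
    return w
```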