General regularization in covariate shift adaptation
- URL: http://arxiv.org/abs/2307.11503v1
- Date: Fri, 21 Jul 2023 11:19:00 GMT
- Title: General regularization in covariate shift adaptation
- Authors: Duc Hoan Nguyen and Sergei V. Pereverzyev and Werner Zellinger
- Abstract summary: We show that the number of samples needed to achieve the same order of accuracy as in standard supervised learning without differences in data distributions is smaller than proven by state-of-the-art analyses.
- Score: 1.5469452301122175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sample reweighting is one of the most widely used methods for correcting the
error of least squares learning algorithms in reproducing kernel Hilbert spaces
(RKHS), that is caused by future data distributions that are different from the
training data distribution. In practical situations, the sample weights are
determined by values of the estimated Radon-Nikodým derivative of the future
data distribution w.r.t. the training data distribution. In this work, we
review known error bounds for reweighted kernel regression in RKHS and obtain,
by combination, novel results. We show, under weak smoothness conditions, that
the number of samples needed to achieve the same order of accuracy as in
standard supervised learning without differences in data distributions is
smaller than proven by state-of-the-art analyses.
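To make the reweighting scheme concrete, the following minimal sketch (not code from the paper) performs importance-weighted kernel ridge regression: the training loss is weighted by the Radon-Nikodým derivative of the future input density w.r.t. the training input density, evaluated analytically here for a toy Gaussian shift. The kernel bandwidth, the regularization parameter lam, and the toy distributions are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    # Gram matrix of the Gaussian (RBF) kernel between the rows of A and B
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def reweighted_kernel_ridge(X, y, w, lam, bandwidth):
    """Importance-weighted least squares in the RKHS of a Gaussian kernel.

    Minimizes (1/n) * sum_i w_i * (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem, f(.) = sum_j alpha_j k(., x_j), with the
    coefficients solving (W K + n * lam * I) alpha = W y.
    """
    n = X.shape[0]
    K = gaussian_kernel(X, X, bandwidth)
    W = np.diag(w)
    alpha = np.linalg.solve(W @ K + n * lam * np.eye(n), W @ y)
    return lambda Z: gaussian_kernel(Z, X, bandwidth) @ alpha

def normal_pdf(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
# covariate shift: training inputs ~ N(0, 1), future inputs ~ N(0.5, 1)
X_tr = rng.normal(0.0, 1.0, size=(200, 1))
y_tr = np.sin(3.0 * X_tr[:, 0]) + 0.1 * rng.normal(size=200)
# Radon-Nikodym derivative dP_future/dP_train evaluated at the training inputs
# (known analytically in this toy example; in practice it must be estimated)
w = normal_pdf(X_tr[:, 0], 0.5, 1.0) / normal_pdf(X_tr[:, 0], 0.0, 1.0)
predict = reweighted_kernel_ridge(X_tr, y_tr, w, lam=1e-2, bandwidth=0.5)
X_future = rng.normal(0.5, 1.0, size=(5, 1))
print(predict(X_future))
```

In practical situations the density ratio is not available in closed form and is replaced by an estimate, as the abstract notes; the accuracy of that estimated weighting is what the reviewed error bounds account for.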
Related papers
- Effect of Random Learning Rate: Theoretical Analysis of SGD Dynamics in Non-Convex Optimization via Stationary Distribution [6.144680854063938]
We consider a variant of stochastic gradient descent (SGD) with a random learning rate to reveal its convergence properties.
We demonstrate that the distribution of a parameter updated by Poisson SGD converges to a stationary distribution under weak assumptions.
arXiv Detail & Related papers (2024-06-23T06:52:33Z)
- Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning [55.75959755058356]
In deep reinforcement learning, estimating the value function is essential to evaluate the quality of states and actions.
A recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator.
We propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution.
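As a rough illustration of that target-perturbation idea (a sketch, not the cited paper's implementation; Gaussian noise and its scale are placeholder choices):

```python
import numpy as np

def symmetric_q_targets(rewards, next_q, dones, gamma=0.99, noise_scale=0.1, rng=None):
    """Bellman targets perturbed by zero-mean synthetic noise (illustrative sketch).

    The idea summarized above: adding zero-mean noise to the regression targets
    reshapes the otherwise skewed Bellman-error distribution toward a Gaussian.
    Gaussian noise and the scale 0.1 are placeholder choices, not the paper's.
    """
    rng = np.random.default_rng() if rng is None else rng
    targets = rewards + gamma * (1.0 - dones) * next_q  # standard TD(0) target
    return targets + rng.normal(0.0, noise_scale, size=targets.shape)
```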
arXiv Detail & Related papers (2024-03-12T14:49:19Z)
- Learning to Re-weight Examples with Optimal Transport for Imbalanced Classification [74.62203971625173]
Imbalanced data pose challenges for deep learning based classification models.
One of the most widely-used approaches for tackling imbalanced data is re-weighting.
We propose a novel re-weighting method based on optimal transport (OT) from a distributional point of view.
arXiv Detail & Related papers (2022-08-05T01:23:54Z)
- Reliable amortized variational inference with physics-based latent distribution correction [0.4588028371034407]
A neural network is trained to approximate the posterior distribution over existing pairs of model and data.
The accuracy of this approach relies on the availability of high-fidelity training data.
We show that our correction step improves the robustness of amortized variational inference with respect to changes in number of source experiments, noise variance, and shifts in the prior distribution.
arXiv Detail & Related papers (2022-07-24T02:38:54Z)
- Optimal regularizations for data generation with probabilistic graphical models [0.0]
Empirically, well-chosen regularization schemes dramatically improve the quality of the inferred models.
We consider the particular case of $L_2$ and $L_1$ regularizations in the Maximum A Posteriori (MAP) inference of generative pairwise graphical models.
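For reference, a generic form of such a regularized MAP objective, written with placeholder notation ($\theta$ for the model parameters, $\mathcal{D}$ for the data, $\lambda$ for the regularization strength) rather than the cited paper's exact formulation:

```latex
\hat{\theta}_{\mathrm{MAP}}
  = \arg\max_{\theta}\; \log p(\mathcal{D}\mid\theta) - \lambda\,\Omega(\theta),
\qquad
\Omega(\theta)=\|\theta\|_2^2 \;(L_2)
\quad\text{or}\quad
\Omega(\theta)=\|\theta\|_1 \;(L_1).
```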
arXiv Detail & Related papers (2021-12-02T14:45:16Z)
- Robust Correction of Sampling Bias Using Cumulative Distribution Functions [19.551668880584973]
Varying domains and biased datasets can lead to differences between the training and the target distributions.
Current approaches for alleviating this often rely on estimating the ratio of training and target probability density functions.
arXiv Detail & Related papers (2020-10-23T22:13:00Z)
- Addressing Variance Shrinkage in Variational Autoencoders using Quantile Regression [0.0]
The Variational AutoEncoder (VAE) has become a popular model for anomaly detection in applications such as lesion detection in medical images.
We describe an alternative approach that avoids the well-known problem of shrinkage or underestimation of variance.
Using estimated quantiles to compute mean and variance under the Gaussian assumption, we compute reconstruction probability as a principled approach to outlier or anomaly detection.
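A minimal sketch of that quantile-to-moment step, assuming the network outputs the 15.87% and 84.13% conditional quantiles (an illustrative choice of levels, not necessarily the cited paper's):

```python
import numpy as np

def gaussian_from_quantiles(q_lo, q_hi):
    """Recover (mu, sigma) of a Gaussian from its 15.87% and 84.13% quantiles.

    For N(mu, sigma^2) these quantiles are exactly mu - sigma and mu + sigma,
    so mu is their midpoint and sigma is half their distance. The quantile
    levels are an illustrative assumption.
    """
    mu = 0.5 * (q_lo + q_hi)
    sigma = 0.5 * (q_hi - q_lo)
    return mu, sigma

def reconstruction_log_prob(x, mu, sigma):
    # per-sample Gaussian log-likelihood, summed over dimensions
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma ** 2)
                  - (x - mu) ** 2 / (2.0 * sigma ** 2), axis=-1)
```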
arXiv Detail & Related papers (2020-10-18T17:37:39Z)
- Distributional Reinforcement Learning via Moment Matching [54.16108052278444]
We formulate a method that learns a finite set of statistics from each return distribution via neural networks.
Our method can be interpreted as implicitly matching all orders of moments between a return distribution and its Bellman target.
Experiments on the suite of Atari games show that our method outperforms the standard distributional RL baselines.
arXiv Detail & Related papers (2020-07-24T05:18:17Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
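A minimal sketch of a binning-free, KS-style calibration error in that spirit (an interpretation for illustration, not necessarily the paper's exact estimator): sort predictions by confidence and take the largest gap between the cumulative predicted confidence and the cumulative observed accuracy.

```python
import numpy as np

def ks_calibration_error(confidences, correct):
    """Binning-free calibration error in the spirit of the KS statistic (sketch).

    Sort predictions by confidence, then compare the cumulative (normalized)
    sums of predicted confidence and of observed correctness; their maximum
    absolute gap plays the role of the KS distance between the two empirical
    distributions mentioned above.
    """
    order = np.argsort(confidences)
    conf = np.asarray(confidences, dtype=float)[order]
    acc = np.asarray(correct, dtype=float)[order]
    n = conf.size
    return float(np.max(np.abs(np.cumsum(conf) / n - np.cumsum(acc) / n)))

# toy usage with hypothetical predictions (1 = correct top-1 prediction)
print(ks_calibration_error([0.9, 0.8, 0.75, 0.6, 0.55], [1, 1, 0, 1, 0]))
```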
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Balance-Subsampled Stable Prediction [55.13512328954456]
We propose a novel balance-subsampled stable prediction (BSSP) algorithm based on the theory of fractional factorial design.
A design-theoretic analysis shows that the proposed method can reduce the confounding effects among predictors induced by the distribution shift.
Numerical experiments on both synthetic and real-world data sets demonstrate that our BSSP algorithm significantly outperforms the baseline methods for stable prediction across unknown test data.
arXiv Detail & Related papers (2020-06-08T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.