Online Kernel Sliced Inverse Regression
- URL: http://arxiv.org/abs/2301.09516v1
- Date: Mon, 23 Jan 2023 16:05:51 GMT
- Title: Online Kernel Sliced Inverse Regression
- Authors: Wenquan Cui, Yue Zhao, Jianjun Xu, Haoyang Cheng
- Abstract summary: Online dimension reduction is a common method for high-dimensional streaming data processing.
In this article, an online kernel sliced inverse regression method is proposed.
We transform the problem into an online generalized eigendecomposition problem, and use a stochastic optimization method to update the centered dimension reduction directions.
- Score: 4.561305216067566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online dimension reduction is a common method for high-dimensional streaming
data processing. Online principal component analysis, online sliced inverse
regression, online kernel principal component analysis and other methods have
been studied in depth, but as far as we know, online supervised nonlinear
dimension reduction methods have not been fully studied. In this article, an
online kernel sliced inverse regression method is proposed. By introducing the
approximate linear dependence condition and dictionary variable sets, we
address the problem that the variable dimension grows with the sample size in
online kernel sliced inverse regression, and propose a reduced-order method
for updating the variables online. We then transform the problem into an
online generalized eigen-decomposition problem, and use a stochastic
optimization method to update the centered dimension reduction directions.
Simulations and real data analysis show that our method achieves performance
close to that of batch-processing kernel sliced inverse regression.
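The abstract points to two computational ingredients: an approximate linear dependence (ALD) test with a dictionary that keeps the working feature dimension from growing with the sample size, and a stochastic update of the directions for an online generalized eigen-decomposition. The Python sketch below only illustrates those two ideas under simplifying assumptions (a Gaussian kernel, a fixed ALD threshold `nu`, a constant step size); it is not the authors' algorithm.

```python
# Illustrative sketch only; kernel, threshold and step size are assumptions,
# not taken from the paper.
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian kernel (one common choice of kernel)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def ald_update(dictionary, Kinv, x, nu=1e-3):
    """Approximate-linear-dependence test: add x to the dictionary only when the
    current dictionary cannot represent its feature vector well, so the number
    of kernel features stays bounded as the sample size grows."""
    if not dictionary:
        return [x], np.array([[1.0 / rbf(x, x)]]), True
    k = np.array([rbf(x, z) for z in dictionary])
    a = Kinv @ k                           # best coefficients on the current dictionary
    delta = rbf(x, x) - k @ a              # ALD residual of the new point
    if delta <= nu:
        return dictionary, Kinv, False
    d = len(dictionary) + 1                # block update of the dictionary Gram inverse
    new = np.zeros((d, d))
    new[:d - 1, :d - 1] = Kinv + np.outer(a, a) / delta
    new[:d - 1, -1] = new[-1, :d - 1] = -a / delta
    new[-1, -1] = 1.0 / delta
    return dictionary + [x], new, True

def rayleigh_step(v, Gamma_t, Sigma_t, step=0.05):
    """One stochastic ascent step on the generalized Rayleigh quotient
    (v' Gamma v) / (v' Sigma v): SIR directions solve the generalized
    eigen-problem Gamma v = lambda Sigma v, where Gamma is the between-slice
    covariance of the kernel features and Sigma their overall covariance;
    Gamma_t and Sigma_t are the current (noisy) streaming estimates."""
    num, den = v @ Gamma_t @ v, v @ Sigma_t @ v
    grad = 2.0 * ((Gamma_t @ v) * den - (Sigma_t @ v) * num) / den ** 2
    v = v + step * grad
    return v / np.linalg.norm(v)
```

In a streaming loop one would call ald_update on each new input, refresh the slice-wise feature statistics that define Gamma_t and Sigma_t, and take a rayleigh_step; the bookkeeping needed to pad the running statistics whenever the dictionary grows is omitted here.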
Related papers
- Adaptive debiased SGD in high-dimensional GLMs with streaming data [4.704144189806667]
We introduce a novel approach to online inference in high-dimensional generalized linear models.
Our method operates in a single-pass mode, significantly reducing both time and space complexity.
We demonstrate that our method, termed the Approximated Debiased Lasso (ADL), not only mitigates the need for the bounded individual probability condition but also significantly improves numerical performance.
arXiv Detail & Related papers (2024-05-28T15:36:48Z)
- Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
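The summary names a stochastic dual descent algorithm without spelling it out. Purely for orientation, here is a toy stochastic coordinate descent on the standard dual objective of kernel ridge / GP regression, 0.5·aᵀ(K + λI)a − aᵀy, which touches one row of the kernel matrix per step; the objective form, sampling scheme, and step size are assumptions rather than the paper's design.

```python
# Toy sketch of stochastic descent on the kernel-ridge / GP dual objective
# 0.5 * a' (K + lam*I) a - a' y; an illustration only, not the paper's algorithm.
import numpy as np

def stochastic_dual_descent(K, y, lam=1e-2, lr=1e-2, steps=50_000, rng=None):
    """K: (n, n) precomputed kernel matrix, y: (n,) targets."""
    rng = np.random.default_rng(rng)
    n = len(y)
    a = np.zeros(n)
    for _ in range(steps):
        i = rng.integers(n)                     # one random training point per step
        g = K[i] @ a + lam * a[i] - y[i]        # i-th coordinate of the dual gradient
        a[i] -= lr * g                          # coordinate update, O(n) per step
    return a                                    # predictive mean at the training inputs: K @ a
```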
arXiv Detail & Related papers (2023-10-31T16:15:13Z)
- Iterative Sketching for Secure Coded Regression [66.53950020718021]
We propose methods for speeding up distributed linear regression.
Specifically, we randomly rotate the basis of the system of equations and then subsample blocks, to simultaneously secure the information and reduce the dimension of the regression problem.
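The rotate-then-subsample recipe in this summary can be pictured in a few lines of NumPy. The dense Haar-random rotation below is chosen purely for clarity (structured rotations are the practical choice at scale), and the block count and number of kept blocks are arbitrary assumptions, not the paper's coding scheme.

```python
# Rough illustration of "rotate the basis, then subsample blocks" for least squares;
# the dense random rotation and block sizes here are illustrative assumptions.
import numpy as np

def rotated_block_sketch_solve(A, b, n_blocks=10, n_keep=4, rng=None):
    rng = np.random.default_rng(rng)
    n, _ = A.shape
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal rotation
    RA, Rb = Q @ A, Q @ b                             # rotating mixes (and masks) the rows
    blocks = np.array_split(np.arange(n), n_blocks)   # partition rotated rows into blocks
    kept = rng.choice(n_blocks, size=n_keep, replace=False)
    rows = np.concatenate([blocks[i] for i in kept])  # keep only a random subset of blocks
    x, *_ = np.linalg.lstsq(RA[rows], Rb[rows], rcond=None)
    return x                                          # approximate least-squares solution
```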
arXiv Detail & Related papers (2023-08-08T11:10:42Z)
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
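The summary says only that the posterior matrix is kept in low-rank plus diagonal form. As one assumed reading, the snippet below maintains a diagonal-plus-low-rank posterior precision, folds in a single linearized observation exactly, and then compresses back to the target rank by SVD while pushing the discarded directions onto the diagonal; the paper's actual predict and update recursions are not reproduced here.

```python
# Crude sketch (an assumed reading, not the paper's algorithm): keep the posterior
# precision as diag(ups) + W @ W.T so a Kalman-style update on P parameters with
# rank L costs roughly O(P * L^2) instead of O(P^2).
import numpy as np

def lowrank_precision_update(ups, W, g, obs_prec, rank):
    """ups: (P,) diagonal part; W: (P, L) low-rank factor; g: (P,) gradient of the
    network output w.r.t. the parameters at the new point (the EKF linearization);
    obs_prec: scalar observation precision."""
    W_ext = np.column_stack([W, np.sqrt(obs_prec) * g])  # exact rank-1 precision increase
    U, s, _ = np.linalg.svd(W_ext, full_matrices=False)  # re-compress to the target rank
    W_new = U[:, :rank] * s[:rank]
    leftover = (U[:, rank:] ** 2) @ (s[rank:] ** 2)      # dropped directions kept diagonally
    return ups + leftover, W_new                         # so total precision is not lost
```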
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- Federated Sufficient Dimension Reduction Through High-Dimensional Sparse Sliced Inverse Regression [4.561305216067566]
Federated learning has become a popular tool in the big data era.
We propose the first federated sparse sliced inverse regression algorithm.
arXiv Detail & Related papers (2023-01-23T15:53:06Z)
- Vector-Valued Least-Squares Regression under Output Regularity Assumptions [73.99064151691597]
We propose and analyse a reduced-rank method for solving least-squares regression problems with infinite-dimensional output.
We derive learning bounds for our method, and study the settings in which its statistical performance improves on the full-rank method.
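As a finite-dimensional point of reference for the reduced-rank idea (the summary gives no algorithmic detail), classical reduced-rank least squares can be written in a few lines: fit ordinary least squares, then project the fitted outputs onto their leading singular directions. This is an assumed analogue, not the paper's infinite-dimensional method.

```python
# Classical reduced-rank least squares as a finite-dimensional point of reference;
# an assumed analogue, not the paper's method.
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """X: (n, p) inputs, Y: (n, q) outputs; returns a rank-constrained (p, q) coefficient matrix."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)     # full-rank least-squares fit
    fitted = X @ B_ols
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                       # projector onto top output directions
    return B_ols @ P                                  # project coefficients onto the low rank
```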
arXiv Detail & Related papers (2022-11-16T15:07:00Z)
- Robust online joint state/input/parameter estimation of linear systems [0.0]
This paper presents a method for jointly estimating the state, input, and parameters of linear systems in an online fashion.
The method is specially designed for measurements that are corrupted with non-Gaussian noise or outliers.
arXiv Detail & Related papers (2022-04-12T09:41:28Z)
- Memory-Efficient Backpropagation through Large Linear Layers [107.20037639738433]
In modern neural networks like Transformers, linear layers require significant memory to store activations during the backward pass.
This study proposes a memory reduction approach to perform backpropagation through linear layers.
arXiv Detail & Related papers (2022-01-31T13:02:41Z)
- Sufficient Dimension Reduction for High-Dimensional Regression and Low-Dimensional Embedding: Tutorial and Survey [5.967999555890417]
This is a tutorial and survey paper on various methods for Sufficient Dimension Reduction (SDR).
We cover these methods from both the statistical high-dimensional regression perspective and the machine learning perspective on dimensionality reduction.
arXiv Detail & Related papers (2021-10-18T21:05:08Z)
- Fast and Robust Online Inference with Stochastic Gradient Descent via Random Scaling [0.9806910643086042]
We develop a new method of online inference for a vector of parameters estimated by the Polyak-Ruppert averaging procedure of stochastic gradient descent algorithms.
Our approach is fully operational with online data and is rigorously underpinned by a functional central limit theorem.
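The summary describes Polyak-Ruppert averaging of SGD iterates paired with a random-scaling quantity for inference. The sketch below shows one way such a recursion can be run fully online; the gradient, the learning-rate schedule, and the exact form of the scaling matrix are assumptions, and the non-standard critical values needed to turn it into confidence intervals are not reproduced.

```python
# Sketch of averaged SGD with a running "random scaling" matrix, computed fully
# online; learning rate, gradient and the exact statistic are assumptions.
import numpy as np

def averaged_sgd_random_scaling(grad_fn, theta0, stream, lr=lambda t: 0.5 / t ** 0.501):
    """stream is assumed non-empty; grad_fn(theta, z) returns a stochastic gradient."""
    theta = np.asarray(theta0, dtype=float).copy()
    bar = np.zeros_like(theta)                       # Polyak-Ruppert average of the iterates
    s0, s1 = 0.0, np.zeros_like(theta)
    s2 = np.zeros((theta.size, theta.size))
    for t, z in enumerate(stream, start=1):
        theta = theta - lr(t) * grad_fn(theta, z)    # plain SGD step
        bar = bar + (theta - bar) / t                # running average, updated online
        s0 += t ** 2                                 # running sums that let the scaling
        s1 = s1 + t ** 2 * bar                       # matrix be assembled without storing
        s2 = s2 + t ** 2 * np.outer(bar, bar)        # the whole history of averages
    # V_t = t^-2 * sum_s s^2 (bar_s - bar_t)(bar_s - bar_t)', assembled from s0, s1, s2
    V = (s2 - np.outer(s1, bar) - np.outer(bar, s1) + s0 * np.outer(bar, bar)) / t ** 2
    return bar, V                                    # point estimate and its scaling matrix
```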
arXiv Detail & Related papers (2021-06-06T15:38:37Z)
- A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem in which the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.