Structure Parameter Optimized Kernel Based Online Prediction with a
Generalized Optimization Strategy for Nonstationary Time Series
- URL: http://arxiv.org/abs/2108.08180v1
- Date: Wed, 18 Aug 2021 14:46:31 GMT
- Title: Structure Parameter Optimized Kernel Based Online Prediction with a
Generalized Optimization Strategy for Nonstationary Time Series
- Authors: Jinhua Guo, Hao Chen, Jingxin Zhang and Sheng Chen
- Abstract summary: Online prediction algorithms aided by sparsification techniques in a reproducing kernel Hilbert space are studied.
The algorithms typically consist of selecting kernel structure parameters and updating the kernel weight vector.
A generalized optimization strategy is designed to construct the kernel dictionary sequentially in multiple kernel connection modes.
- Score: 14.110902170321348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, online prediction algorithms aided by
sparsification techniques in a reproducing kernel Hilbert space are studied
for nonstationary time series. Such online prediction algorithms typically
consist of two parts: the selection of kernel structure parameters and the
updating of the kernel weight vector. For the structure parameters, the
kernel dictionary is selected by sparsification techniques with online
selective modeling criteria, and the kernel covariance matrix is
intermittently optimized using the covariance matrix adaptation evolution
strategy (CMA-ES). Optimizing the real symmetric covariance matrix not only
improves the flexibility of the kernel structure by capturing
cross-correlations among the input variables, but also partly alleviates
the prediction uncertainty caused by kernel dictionary selection for
nonstationary time series. To sufficiently capture the underlying dynamics
of the prediction-error time series, a generalized optimization strategy is
designed that constructs the kernel dictionary sequentially in multiple
kernel connection modes. This strategy provides a more self-contained way
to construct the entire set of kernel connections, enhancing the ability to
adaptively track changing dynamic characteristics. Numerical simulations
demonstrate that the proposed approach achieves superior prediction
performance for nonstationary time series.
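To make the pipeline concrete, here is a minimal, illustrative sketch (not the authors' implementation): an anisotropic Gaussian kernel parameterized through a Cholesky-like factor L of the symmetric positive-definite matrix M = L L^T, dictionary admission by a simple coherence test standing in for the paper's online selective modeling criteria, ridge-style weight solving over the dictionary, and an intermittent (1+1) evolution strategy as a lightweight stand-in for full CMA-ES. All class names, thresholds, and the retuning schedule are assumptions for illustration.

```python
import numpy as np

def make_kernel(L):
    """Anisotropic Gaussian kernel k(x, y) = exp(-0.5 (x-y)^T M (x-y)), M = L L^T."""
    M = L @ L.T
    def k(x, y):
        d = x - y
        return np.exp(-0.5 * d @ M @ d)
    return k

class OnlineKernelPredictor:
    def __init__(self, dim, nu=0.95, lam=1e-3):
        self.L = np.eye(dim)      # Cholesky-like factor of the kernel covariance
        self.nu = nu              # coherence threshold for dictionary admission
        self.lam = lam            # ridge regularization
        self.dict_x, self.dict_y = [], []

    def predict(self, x):
        if not self.dict_x:
            return 0.0
        k = make_kernel(self.L)
        K = np.array([[k(a, b) for b in self.dict_x] for a in self.dict_x])
        alpha = np.linalg.solve(K + self.lam * np.eye(len(K)), np.array(self.dict_y))
        return np.array([k(x, xi) for xi in self.dict_x]) @ alpha

    def update(self, x, y):
        # admit x into the dictionary only if no stored center is too similar
        k = make_kernel(self.L)
        if not self.dict_x or max(k(x, xi) for xi in self.dict_x) < self.nu:
            self.dict_x.append(np.asarray(x, float))
            self.dict_y.append(float(y))

    def retune(self, X, Y, iters=50, sigma=0.1, rng=None):
        """(1+1)-ES over L; full CMA-ES would also adapt the search covariance."""
        if rng is None:
            rng = np.random.default_rng(0)
        def loss(L):
            old, self.L = self.L, L
            err = sum((self.predict(x) - y) ** 2 for x, y in zip(X, Y))
            self.L = old
            return err
        best = loss(self.L)
        for _ in range(iters):
            cand = np.tril(self.L + sigma * rng.standard_normal(self.L.shape))
            c = loss(cand)
            if c < best:
                self.L, best = cand, c
```

In use, one would interleave predict/update over the incoming stream and call retune every few hundred samples on a recent window, mirroring the intermittent structure-parameter optimization described in the abstract.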
Related papers
- Diagonal Over-parameterization in Reproducing Kernel Hilbert Spaces as an Adaptive Feature Model: Generalization and Adaptivity [11.644182973599788]
The diagonal adaptive kernel model learns kernel eigenvalues and output coefficients simultaneously during training.
We show that the adaptivity comes from learning the right eigenvalues during training.
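As a toy sketch of that mechanism (an assumption for illustration, not the paper's construction), the weight on feature j can be over-parameterized as a product a_j * b_j; plain gradient descent then adapts per-feature scales (playing the role of kernel eigenvalues) and output coefficients simultaneously, concentrating weight on the informative features:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
Phi = rng.standard_normal((n, p))        # fixed feature map phi(x)
w_true = np.zeros(p); w_true[:5] = 1.0   # only the first 5 features matter
y = Phi @ w_true + 0.1 * rng.standard_normal(n)

a = np.full(p, 0.1)                      # per-feature scale ("eigenvalue" role)
b = np.full(p, 0.1)                      # output coefficients
lr = 0.05
for _ in range(2000):
    g = Phi.T @ (Phi @ (a * b) - y) / n  # gradient w.r.t. the effective weights
    ga, gb = g * b, g * a                # chain rule through the product a*b
    a, b = a - lr * ga, b - lr * gb
print(np.round(a * b, 2)[:10])           # mass concentrates on the true features
```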
arXiv Detail & Related papers (2025-01-15T09:20:02Z)
- Scalable Kernel Inverse Optimization [2.799896314754615]
Inverse optimization is a framework for learning the unknown objective function of an expert decision-maker from a past dataset.
We extend the hypothesis class of IO objective functions to a reproducing kernel Hilbert space.
We show that a variant of the representer theorem holds for a specific training loss, allowing the reformulation of the problem as a finite-dimensional convex optimization program.
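The representer-theorem mechanism can be illustrated generically with kernel ridge regression (the paper's inverse-optimization loss is not reproduced here): the RKHS problem collapses to a finite-dimensional convex program whose solution is a linear system in the expansion coefficients.

```python
import numpy as np

def rbf_gram(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(100)

# Representer theorem: the RKHS minimizer is f = sum_i alpha_i k(x_i, .),
# and for the squared loss alpha solves (K + n*lam*I) alpha = y.
lam = 1e-2
K = rbf_gram(X, X)
alpha = np.linalg.solve(K + len(X) * lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf_gram(X_test, X) @ alpha)       # predictions from the finite expansion
```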
arXiv Detail & Related papers (2024-10-31T14:06:43Z)
- Column and row subset selection using nuclear scores: algorithms and theory for Nyström approximation, CUR decomposition, and graph Laplacian reduction [0.0]
We develop unified methodologies for fast, efficient, and theoretically guaranteed column selection.
First, we derive and implement a sparsity-exploiting deterministic algorithm applicable to tasks including kernel approximation and CUR decomposition.
Next, we develop a matrix-free formalism relying on a randomization scheme satisfying guaranteed concentration bounds.
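The nuclear-score method itself is not reproduced here, but a classical deterministic baseline in the same family is greedy diagonal-pivoted partial Cholesky, which selects columns of a PSD kernel matrix for a rank-k Nyström-style approximation K ≈ F F^T:

```python
import numpy as np

def pivoted_cholesky(K, k):
    """Greedy column selection: pick the column with the largest residual
    diagonal, then deflate; yields a rank-k factor F with K ~ F F^T."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()  # residual diagonal
    F = np.zeros((n, k))
    pivots = []
    for j in range(k):
        i = int(np.argmax(d))            # greedy pivot
        pivots.append(i)
        F[:, j] = (K[:, i] - F @ F[i, :]) / np.sqrt(d[i])
        d -= F[:, j] ** 2
        d[i] = 0.0
    return F, pivots

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1))   # RBF kernel matrix
F, pivots = pivoted_cholesky(K, 20)
print(np.linalg.norm(K - F @ F.T) / np.linalg.norm(K))  # relative error
```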
arXiv Detail & Related papers (2024-07-01T18:10:19Z)
- Dynamic selection of p-norm in linear adaptive filtering via online kernel-based reinforcement learning [8.319127681936815]
This study addresses the problem of dynamically selecting, at each time instance, the "optimal" p-norm to combat outliers in linear adaptive filtering.
An online and data-driven framework is designed via kernel-based reinforcement learning (KBRL).
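A sketch of the underlying least-mean-p-power (LMP) update follows; the KBRL policy is replaced by a naive rule (an assumption for illustration only) that keeps, at each step, the candidate p whose update has the smallest median absolute error over a recent buffer:

```python
import numpy as np

def lmp_step(w, x, d, p, mu=0.05):
    """Least-mean-p-power update: gradient step on |d - w^T x|^p."""
    e = d - w @ x
    return w + mu * np.abs(e) ** (p - 1) * np.sign(e) * x

rng = np.random.default_rng(3)
w_true = np.array([1.0, -0.5, 0.25])
w = np.zeros(3)
buf = []                                       # recent samples for evaluation
for t in range(2000):
    x = rng.standard_normal(3)
    d = w_true @ x + 0.05 * rng.standard_t(df=1.5)  # heavy-tailed noise
    buf = (buf + [(x, d)])[-20:]
    # naive stand-in for the KBRL policy: keep the p whose one-step update
    # has the smallest median absolute error over the recent buffer
    cands = [lmp_step(w, x, d, p) for p in (1.0, 1.5, 2.0)]
    w = min(cands, key=lambda wc: np.median([abs(dd - wc @ xx) for xx, dd in buf]))
print(np.round(w, 2))                          # close to w_true despite outliers
```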
arXiv Detail & Related papers (2022-10-20T14:49:39Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
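One common recipe for challenge (i) (an illustrative stand-in, not the paper's framework) treats the spread of per-tree predictions as an uncertainty estimate inside a confidence-bound acquisition, and sidesteps the piecewise-constant surface of challenge (ii) with random candidate search:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def objective(x):                        # toy black box to minimize
    return (x[0] - 0.3) ** 2 + np.sin(5 * x[1])

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (10, 2))           # initial design
y = np.array([objective(x) for x in X])

for _ in range(20):
    rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    cand = rng.uniform(0, 1, (1000, 2))  # random search over the unit box
    per_tree = np.stack([t.predict(cand) for t in rf.estimators_])
    mu, sigma = per_tree.mean(0), per_tree.std(0)
    x_next = cand[np.argmin(mu - sigma)] # lower confidence bound (minimization)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))
print(X[np.argmin(y)], y.min())
```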
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Matrix Reordering for Noisy Disordered Matrices: Optimality and Computationally Efficient Algorithms [9.245687221460654]
Motivated by applications in single-cell biology and metagenomics, we investigate the problem of matrix reordering based on a noisy monotone Toeplitz matrix model.
We establish the fundamental statistical limit for this problem in a decision-theoretic framework and demonstrate that a constrained least squares estimator achieves the optimal rate.
To address its computational cost, we propose a polynomial-time adaptive sorting algorithm with guaranteed performance improvement.
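The paper's estimators are not reproduced here; the sketch below shows classical spectral seriation, a standard polynomial-time baseline for this reordering problem, on a synthetic noisy monotone Toeplitz matrix (sizes and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
theta = 1.0 - np.abs(i - j) / n               # monotone Toeplitz similarity
perm = rng.permutation(n)                     # unknown disorder
A = theta[np.ix_(perm, perm)] + 0.05 * rng.standard_normal((n, n))
A = (A + A.T) / 2                             # symmetrize the noisy observation

Lap = np.diag(A.sum(1)) - A                   # graph Laplacian of similarities
vals, vecs = np.linalg.eigh(Lap)
est_pos = np.argsort(np.argsort(vecs[:, 1]))  # ranks given by the Fiedler vector
corr = np.corrcoef(perm, est_pos)[0, 1]       # recovery is up to reversal
print(f"rank correlation with the true order: {abs(corr):.3f}")
```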
arXiv Detail & Related papers (2022-01-17T14:53:52Z)
- EBM-Fold: Fully-Differentiable Protein Folding Powered by Energy-based Models [53.17320541056843]
We propose a fully-differentiable approach for protein structure optimization, guided by a data-driven generative network.
Our EBM-Fold approach can efficiently produce high-quality decoys, compared against traditional Rosetta-based structure optimization routines.
arXiv Detail & Related papers (2021-05-11T03:40:29Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools for maximizing the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for such ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
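The pattern can be sketched classically (an assumption-heavy stand-in: a plain quadratic cost replaces the circuit's energy landscape): rather than descending on all parameters at once, run a sequence of optimizations, each over one block of parameters with the rest frozen.

```python
import numpy as np

rng = np.random.default_rng(6)
R = rng.standard_normal((30, 30))
A = R @ R.T / 30                          # toy convex quadratic cost
b = rng.standard_normal(30)
cost = lambda th: 0.5 * th @ A @ th - b @ th
grad = lambda th: A @ th - b

theta = np.zeros(30)
blocks = np.array_split(np.arange(30), 5) # five "layers" of parameters
for _ in range(20):                       # outer sweeps
    for blk in blocks:                    # one variational run per block
        for _ in range(50):               # inner descent on this block only
            theta[blk] -= 0.05 * grad(theta)[blk]
print(round(cost(theta), 4), round(cost(np.linalg.solve(A, b)), 4))
```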
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Stochastic batch size for adaptive regularization in deep network optimization [63.68104397173262]
We propose a first-order optimization algorithm incorporating adaptive regularization, applicable to machine learning problems in the deep learning framework.
We empirically demonstrate the effectiveness of our algorithm using an image classification task based on conventional network models applied to commonly used benchmark datasets.
arXiv Detail & Related papers (2020-04-14T07:54:53Z)
- Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic Perspectives [97.16266088683061]
The article rigorously establishes why symplectic discretization schemes are important for momentum-based optimization algorithms.
It provides a characterization of algorithms that exhibit accelerated convergence.
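A small numerical illustration of that theme: discretize the heavy-ball ODE x'' + a x' + grad f(x) = 0 on a stiff quadratic with explicit Euler versus symplectic (semi-implicit) Euler. The step size below is deliberately chosen (an illustrative assumption) so the explicit scheme diverges while the symplectic one converges:

```python
import numpy as np

k, a, h, T = 50.0, 0.5, 0.19, 400        # stiff quadratic f(x) = 0.5*k*x^2
grad = lambda x: k * x

def explicit_euler(x, v):
    for _ in range(T):
        x, v = x + h * v, v + h * (-a * v - grad(x))
    return x

def symplectic_euler(x, v):
    for _ in range(T):
        v = v + h * (-a * v - grad(x))   # update velocity first...
        x = x + h * v                    # ...then position with the new velocity
    return x

print(f"explicit Euler:   {explicit_euler(1.0, 0.0):.3e}")   # blows up
print(f"symplectic Euler: {symplectic_euler(1.0, 0.0):.3e}") # decays to 0
```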
arXiv Detail & Related papers (2020-02-28T00:32:47Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
To alleviate the exponential cost of learning a parameter for every feature interaction, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
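A minimal sketch of the tensor-parameterization idea under simplifying assumptions (two feature groups and a rank-R CP factorization of the interaction matrix; all names and sizes are illustrative): the p-by-q interaction weights W = U V^T are never stored explicitly, only the factors U and V, trained by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, q, R = 500, 20, 15, 3
X1 = rng.standard_normal((n, p))         # first feature group
X2 = rng.standard_normal((n, q))         # second feature group
W_true = rng.standard_normal((p, 2)) @ rng.standard_normal((2, q))
y = np.einsum("np,pq,nq->n", X1, W_true, X2) + 0.1 * rng.standard_normal(n)

U = 0.1 * rng.standard_normal((p, R))    # CP factors: W = U V^T, never formed
V = 0.1 * rng.standard_normal((q, R))
lr = 0.005
for _ in range(5000):
    P1, P2 = X1 @ U, X2 @ V              # (n, R) projections
    r = (P1 * P2).sum(1) - y             # residual of sum_r <u_r,x1><v_r,x2>
    gU = X1.T @ (r[:, None] * P2) / n    # gradient w.r.t. U
    gV = X2.T @ (r[:, None] * P1) / n    # gradient w.r.t. V
    U, V = U - lr * gU, V - lr * gV
print(round(float(np.mean(r ** 2)), 3))  # training MSE shrinks toward noise level
```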
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.