Generalized Kernel-Based Dynamic Mode Decomposition
- URL: http://arxiv.org/abs/2002.04375v1
- Date: Tue, 11 Feb 2020 13:50:00 GMT
- Title: Generalized Kernel-Based Dynamic Mode Decomposition
- Authors: Patrick Heas, Cedric Herzet, Benoit Combes
- Abstract summary: We devise an algorithm based on low-rank constrained optimization and kernel-based computation that generalizes a recent approach called "kernel-based dynamic mode decomposition".
This new algorithm is characterized by gains in both approximation accuracy, as evidenced by numerical simulations, and computational complexity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reduced modeling in high-dimensional reproducing kernel Hilbert spaces offers
the opportunity to efficiently approximate non-linear dynamics. In this work,
we devise an algorithm based on low-rank constrained optimization and
kernel-based computation that generalizes a recent approach called
"kernel-based dynamic mode decomposition". This new algorithm is characterized
by gains in both approximation accuracy, as evidenced by numerical simulations,
and computational complexity.
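As context, a minimal sketch of the baseline kernel-based DMD (in the style of Williams et al.) that the paper generalizes; the function names, the RBF kernel, and the eigenvalue truncation below are illustrative assumptions, not the authors' low-rank algorithm:

```python
import numpy as np

def kernel_dmd(X, Y, kernel, rank):
    """Baseline kernel-based DMD sketch: approximate Koopman eigenvalues
    from snapshot pairs y_k = F(x_k) using only kernel evaluations."""
    G = kernel(X, X)                     # Gram matrix G_ij = k(x_i, x_j)
    A = kernel(Y, X)                     # cross-Gram A_ij = k(y_i, x_j)
    sig2, Q = np.linalg.eigh(G)          # spectral decomposition of G
    idx = np.argsort(sig2)[::-1][:rank]  # keep the `rank` leading modes
    Q = Q[:, idx]
    sig_inv = np.diag(1.0 / np.sqrt(np.maximum(sig2[idx], 1e-12)))
    K_hat = sig_inv @ Q.T @ A @ Q @ sig_inv  # reduced Koopman matrix
    return np.linalg.eigvals(K_hat)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-stacked point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)
```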
Related papers
- Adaptive Error-Bounded Hierarchical Matrices for Efficient Neural Network Compression [0.0]
This paper introduces a dynamic, error-bounded hierarchical matrix (H-matrix) compression method tailored for Physics-Informed Neural Networks (PINNs).
The proposed approach reduces the computational complexity and memory demands of large-scale physics-based models while preserving the essential properties of the Neural Tangent Kernel (NTK).
Empirical results demonstrate that this technique outperforms traditional compression methods, such as Singular Value Decomposition (SVD), pruning, and quantization, by maintaining high accuracy and improving generalization capabilities.
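The mechanism being compared against plain SVD is easy to illustrate: off-diagonal blocks are replaced by truncated SVD factors whose rank is the smallest one meeting an error bound. A toy one-level sketch under an assumed relative Frobenius tolerance, not the paper's implementation:

```python
import numpy as np

def error_bounded_lowrank(block, tol):
    """Keep just enough singular values so the relative Frobenius
    error of the truncated block stays below `tol`."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 1.0 - tol**2)) + 1
    return U[:, :r] * s[:r], Vt[:r]      # rank-r factors

def hmatrix_compress(M, tol=1e-2, leaf=64):
    """Toy one-level H-matrix-style compression: diagonal blocks stay
    dense (recursively), off-diagonal blocks become low-rank factors."""
    n = M.shape[0]
    if n <= leaf:
        return M
    mid = n // 2
    return (hmatrix_compress(M[:mid, :mid], tol, leaf),
            hmatrix_compress(M[mid:, mid:], tol, leaf),
            error_bounded_lowrank(M[:mid, mid:], tol),
            error_bounded_lowrank(M[mid:, :mid], tol))
```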
arXiv Detail & Related papers (2024-09-11T05:55:51Z) - Asymptotic Dynamics of Alternating Minimization for Bilinear Regression [2.992602379681373]
We show that the dynamics of alternating minimization can be described effectively by a two-dimensional discrete process whose evolution retains memory of all previous steps.
The theoretical framework developed in this work can be applied to analysis of various iterative algorithms, extending beyond the scope of alternating minimization.
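As a rough illustration of the algorithm under study, alternating minimization for a bilinear model y_i ≈ u^T A_i v, where each half-step reduces to ordinary least squares (the model form and variable names are assumptions for exposition, not the paper's exact setting):

```python
import numpy as np

def alt_min_bilinear(A, y, d1, d2, iters=50, seed=0):
    """Alternating minimization for y_i ≈ u^T A_i v with A_i of
    shape (d1, d2): fixing one factor makes the problem linear
    in the other, so each half-step is a least-squares solve."""
    rng = np.random.default_rng(seed)
    u, v = rng.normal(size=d1), rng.normal(size=d2)
    for _ in range(iters):
        Mu = np.stack([Ai @ v for Ai in A])      # y_i ≈ u^T (A_i v)
        u = np.linalg.lstsq(Mu, y, rcond=None)[0]
        Mv = np.stack([Ai.T @ u for Ai in A])    # y_i ≈ v^T (A_i^T u)
        v = np.linalg.lstsq(Mv, y, rcond=None)[0]
    return u, v
```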
arXiv Detail & Related papers (2024-02-07T11:09:10Z) - Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- stochastic gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
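A hedged sketch of the flavor of method described: minibatched descent with momentum on the dual (representer-weight) system (K + λI)α = y of GP regression. Step sizes and the sampling scheme are illustrative guesses, not the paper's exact algorithm:

```python
import numpy as np

def stochastic_dual_descent(K, y, lam=1e-2, lr=0.5, beta=0.9,
                            batch=128, steps=2000, seed=0):
    """Minibatched gradient steps with momentum on the dual objective
    0.5 * a^T (K + lam*I) a - y^T a, whose minimizer solves GP regression."""
    rng = np.random.default_rng(seed)
    n = len(y)
    alpha, vel = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        g = K[idx] @ alpha + lam * alpha[idx] - y[idx]  # sampled residual rows
        vel[idx] = beta * vel[idx] + g
        alpha[idx] -= (lr / len(idx)) * vel[idx]
    return alpha  # predictive mean at X* is K(X*, X) @ alpha
```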
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
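Generically, the Deep Equilibrium framework solves z* = f(z*, x) for a learned iteration map f. A minimal damped fixed-point solver, assuming f is contractive (the toy map below stands in for one step of a learned proximal-gradient solver):

```python
import numpy as np

def deq_solve(f, x, z0, damping=0.5, tol=1e-6, max_iter=500):
    """Damped Picard iteration for the Deep Equilibrium fixed point
    z* = f(z*, x); convergence is guaranteed when f is contractive."""
    z = z0
    for _ in range(max_iter):
        z_next = (1 - damping) * z + damping * f(z, x)
        if np.linalg.norm(z_next - z) <= tol * (1 + np.linalg.norm(z)):
            return z_next
        z = z_next
    return z

# toy contractive map (|df/dz| <= 0.5 < 1), so the iteration converges
f = lambda z, x: 0.5 * np.tanh(z) + x
z_star = deq_solve(f, x=np.array([0.2]), z0=np.zeros(1))
```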
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Effective Hamiltonian approach to the exact dynamics of open system by complex discretization approximation for environment [0.0]
This paper proposes a novel generalization of the discretization approximation method in the complex plane using complex Gauss quadratures.
The effective Hamiltonian constructed in this way is non-Hermitian and exhibits complex energy modes with negative imaginary parts.
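For intuition about discretization approximation in general: a continuous bath is replaced by finitely many modes whose frequencies and couplings come from quadrature nodes and weights. The real-axis Gauss-Legendre version below is only for orientation (the spectral density and frequency range are invented); the paper's contribution is extending such quadratures into the complex plane:

```python
import numpy as np

# Discretize a continuous bath spectral density J(w) into N modes via
# Gauss-Legendre quadrature: node k gives a mode frequency, and the
# weight fixes its coupling through g_k^2 = w_k * J(freq_k).
N = 32
x, w = np.polynomial.legendre.leggauss(N)   # nodes/weights on [-1, 1]
a, b = 0.0, 5.0                             # assumed frequency support
freqs = 0.5 * (b - a) * x + 0.5 * (a + b)   # mapped quadrature nodes
J = lambda om: om * np.exp(-om)             # illustrative spectral density
couplings = np.sqrt(0.5 * (b - a) * w * J(freqs))
```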
arXiv Detail & Related papers (2023-03-12T05:34:29Z) - Matrix Reordering for Noisy Disordered Matrices: Optimality and Computationally Efficient Algorithms [9.245687221460654]
Motivated by applications in single-cell biology and metagenomics, we investigate the problem of matrix reordering based on a noisy monotone Toeplitz matrix model.
We establish the fundamental statistical limit for this problem in a decision-theoretic framework and demonstrate that a constrained least squares estimator achieves the optimal rate.
To address its computational cost, we propose a novel polynomial-time adaptive sorting algorithm with guaranteed performance improvement.
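The paper's adaptive sorting algorithm is not reproduced here; as a point of reference, a classical baseline for recovering a latent row order is spectral seriation, which sorts by the Fiedler vector of the similarity graph's Laplacian. A minimal sketch assuming a nonnegative, symmetrizable similarity matrix:

```python
import numpy as np

def spectral_seriation(S):
    """Classical spectral-seriation baseline: recover a row/column
    ordering by sorting the Fiedler vector (eigenvector of the
    second-smallest Laplacian eigenvalue) of the similarity matrix."""
    S = 0.5 * (S + S.T)                 # symmetrize the noisy input
    L = np.diag(S.sum(axis=1)) - S      # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    return np.argsort(vecs[:, 1])       # order induced by the Fiedler vector
```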
arXiv Detail & Related papers (2022-01-17T14:53:52Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded in terms of the "complexity" of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
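The template is the textbook inexact augmented Lagrangian method, sketched below for min f(x) subject to Ax = b: each primal subproblem is solved only approximately by a few gradient steps before the multiplier update. Step sizes are illustrative, and the decentralized structure and acceleration are omitted:

```python
import numpy as np

def inexact_augmented_lagrangian(grad_f, A, b, x0, rho=10.0,
                                 inner_steps=50, inner_lr=1e-3, outer=20):
    """Inexact AL for min f(x) s.t. Ax = b: approximately minimize the
    augmented Lagrangian in x, then take a dual ascent step on lam."""
    x, lam = x0.astype(float).copy(), np.zeros(A.shape[0])
    for _ in range(outer):
        for _ in range(inner_steps):     # inexact primal subproblem
            r = A @ x - b
            x -= inner_lr * (grad_f(x) + A.T @ (lam + rho * r))
        lam += rho * (A @ x - b)         # multiplier (dual) update
    return x, lam
```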
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - On the Convergence of the Dynamic Inner PCA Algorithm [5.9931120596636935]
DiPCA is a powerful method for the analysis of time-dependent data.
We show that the DiPCA solution scheme is a specialized variant of a coordinate decomposition algorithm.
We compare the performance of the decomposition strategies with that of off-the-shelf solvers.
arXiv Detail & Related papers (2020-03-12T17:50:34Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and approximate inference tasks.
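The underlying trick admits a small sketch: store the forward ODE solution at a few well-chosen nodes, then let the backward/adjoint pass query a cheap interpolant instead of re-integrating. The Chebyshev nodes, degree, and stand-in trajectory below are assumptions for illustration:

```python
import numpy as np

def chebyshev_nodes(t0, t1, n):
    """Chebyshev nodes on [t0, t1], a stable choice for interpolation."""
    k = np.arange(n)
    return t0 + (t1 - t0) * (np.cos((2 * k + 1) * np.pi / (2 * n)) + 1) / 2

t_nodes = chebyshev_nodes(0.0, 1.0, 8)
z_nodes = np.sin(3 * t_nodes)           # stand-in for stored ODE states
coeffs = np.polynomial.chebyshev.chebfit(t_nodes, z_nodes, deg=7)
z_interp = lambda t: np.polynomial.chebyshev.chebval(t, coeffs)
# the adjoint integrator can now evaluate z_interp(t) cheaply at any t,
# instead of solving the forward ODE again during backpropagation
```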
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.