Integrated utilization of equations and small dataset in the Koopman operator: applications to forward and inverse problems
- URL: http://arxiv.org/abs/2503.21048v2
- Date: Tue, 22 Apr 2025 23:41:36 GMT
- Title: Integrated utilization of equations and small dataset in the Koopman operator: applications to forward and inverse problems
- Authors: Ichiro Ohta, Shota Koyanagi, Kayo Kinjo, Jun Ohkubo
- Abstract summary: This paper yields methods for incorporating ambiguous prior knowledge into the EDMD algorithm. The ambiguous prior knowledge corresponds to the underlying time-evolution equations with unknown parameters. We propose a scheme to apply the proposed method to inverse problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been a growing interest in data-driven approaches in physics, such as extended dynamic mode decomposition (EDMD). The EDMD algorithm focuses on nonlinear time-evolution systems, and the constructed Koopman matrix yields the next-time prediction with only linear matrix-product operations. Data-driven approaches generally require a large dataset. However, if one has some prior knowledge, even ambiguous knowledge, one could achieve sufficient learning from only a small dataset by taking advantage of it. This paper presents methods for incorporating ambiguous prior knowledge into the EDMD algorithm. The ambiguous prior knowledge in this paper corresponds to the underlying time-evolution equations with unknown parameters. First, we apply the proposed method to forward problems, i.e., prediction tasks. Second, we propose a scheme to apply the proposed method to inverse problems, i.e., parameter estimation tasks. We demonstrate learning with only a small dataset using guiding examples, i.e., the Duffing and the van der Pol systems.
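The core EDMD step described in the abstract can be sketched in a few lines of NumPy: lift snapshot pairs through a dictionary of observables, fit the Koopman matrix by least squares, and obtain one-step predictions by a single matrix product. The monomial dictionary and the scalar linear test system below are illustrative assumptions, not the paper's setup (which uses the Duffing and van der Pol systems):

```python
import numpy as np

def edmd_koopman(X, Y, dictionary):
    """Estimate the Koopman matrix K from snapshot pairs (X, Y).

    X, Y : arrays of shape (n_samples, n_states), where Y is the
           one-step time evolution of X.
    dictionary : maps an (n_samples, n_states) array to lifted
                 observables of shape (n_samples, n_features).
    """
    Psi_X = dictionary(X)
    Psi_Y = dictionary(Y)
    # Least-squares fit so that Psi_X @ K ~= Psi_Y
    K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)
    return K

def monomials(X):
    """Monomial dictionary up to degree 2 for a 1D state."""
    x = X[:, 0]
    return np.column_stack([np.ones_like(x), x, x**2])

# Toy example: the linear map x_{t+1} = 0.9 x_t
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
Y = 0.9 * X
K = edmd_koopman(X, Y, monomials)

# Next-time prediction is a single matrix product on the lifted state
x_new = np.array([[0.5]])
pred = monomials(x_new) @ K  # columns: [1, x, x^2] one step ahead
```

For this exactly representable system the fitted `K` is diagonal, and the second column of `pred` recovers 0.9 * 0.5 = 0.45; for genuinely nonlinear systems the dictionary must be rich enough to approximately close under the dynamics.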
Related papers
- Deep learning methods for inverse problems using connections between proximal operators and Hamilton-Jacobi equations [0.8749675983608171]
Inverse problems are important mathematical problems that seek to recover model parameters from noisy data. We propose to leverage connections between proximal operators and Hamilton-Jacobi partial differential equations (HJ PDEs) to develop novel deep learning architectures for learning the prior.
arXiv Detail & Related papers (2025-12-29T19:50:55Z) - Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems [51.54065545849027]
The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems. VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. We propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator.
arXiv Detail & Related papers (2025-07-09T18:55:48Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - Informed Spectral Normalized Gaussian Processes for Trajectory Prediction [0.0]
We propose a novel regularization-based continual learning method for SNGPs.
Our proposal builds upon well-established methods and requires no rehearsal memory or parameter expansion.
We apply our informed SNGP model to the trajectory prediction problem in autonomous driving by integrating prior drivability knowledge.
arXiv Detail & Related papers (2024-03-18T17:05:24Z) - Large-Scale OD Matrix Estimation with A Deep Learning Method [70.78575952309023]
The proposed method integrates deep learning and numerical optimization algorithms to infer matrix structure and guide numerical optimization.
We conducted tests to demonstrate the good generalization performance of our method on a large-scale synthetic dataset.
arXiv Detail & Related papers (2023-10-09T14:30:06Z) - One-Pass Learning via Bridging Orthogonal Gradient Descent and Recursive
Least-Squares [8.443742714362521]
We develop an algorithm for one-pass learning which seeks to perfectly fit every new datapoint while changing the parameters in a direction that causes the least change to the predictions on previous datapoints.
Our algorithm uses the memory efficiently by exploiting the structure of the streaming data via an incremental principal component analysis (IPCA)
Our experiments show the effectiveness of the proposed method compared to the baselines.
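The one-pass setting described above can be illustrated with a plain recursive least-squares (RLS) update, which revisits no past data point: each new sample updates the weights and a running inverse-covariance matrix in closed form. This is a textbook building block, not the paper's specific algorithm (which additionally bridges OGD and uses incremental PCA); the target function below is an illustrative assumption:

```python
import numpy as np

def rls_step(w, P, x, y, lam=1.0):
    """One recursive least-squares update for a linear model w @ x ~= y.

    P is the running inverse covariance; lam is a forgetting factor
    (lam = 1.0 means no forgetting).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)       # gain vector
    e = y - w @ x                 # prediction error on the new point
    w = w + k * e                 # correct weights using the error
    P = (P - np.outer(k, Px)) / lam
    return w, P

# Fit y = 2*x0 - x1 one sample at a time, never storing past data
rng = np.random.default_rng(1)
w = np.zeros(2)
P = 1e3 * np.eye(2)               # large P = weak prior on w
for _ in range(200):
    x = rng.normal(size=2)
    y = 2.0 * x[0] - 1.0 * x[1]
    w, P = rls_step(w, P, x, y)
# After the stream, w is close to [2, -1]
```

Memory cost is O(d^2) for `P` regardless of stream length, which is the property one-pass methods exploit.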
arXiv Detail & Related papers (2022-07-28T02:01:31Z) - Generalization Bounds for Data-Driven Numerical Linear Algebra [24.961270871124217]
Data-driven algorithms can adapt their internal structure or parameters to inputs from unknown application-specific distributions, by learning from a training sample of inputs.
Several recent works have applied this approach to problems in numerical linear algebra, obtaining significant empirical gains in performance.
In this work we prove generalization bounds for those algorithms, within the PAC-learning framework for data-driven algorithm selection proposed by Gupta and Roughgarden.
arXiv Detail & Related papers (2022-06-16T02:23:45Z) - Simple Stochastic and Online Gradient DescentAlgorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Data-driven Weight Initialization with Sylvester Solvers [72.11163104763071]
We propose a data-driven scheme to initialize the parameters of a deep neural network.
We show that our proposed method is especially effective in few-shot and fine-tuning settings.
arXiv Detail & Related papers (2021-05-02T07:33:16Z) - EPEM: Efficient Parameter Estimation for Multiple Class Monotone Missing
Data [3.801859210248944]
We propose a novel algorithm to compute the maximum likelihood estimators (MLEs) of a multiple class, monotone missing dataset.
As the computation is exact, our EPEM algorithm does not require multiple iterations through the data as other imputation approaches.
arXiv Detail & Related papers (2020-09-23T20:07:53Z) - The data-driven physical-based equations discovery using evolutionary
approach [77.34726150561087]
We describe the algorithm for the mathematical equations discovery from the given observations data.
The algorithm combines genetic programming with the sparse regression.
It could be used for governing analytical equation discovery as well as for partial differential equations (PDE) discovery.
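The sparse-regression half of the approach above can be sketched with sequentially thresholded least squares (the STLSQ scheme popularized by SINDy): fit coefficients over a library of candidate terms, zero out the small ones, and refit on the survivors. The candidate library and the toy ODE below are illustrative assumptions, not the paper's evolutionary algorithm:

```python
import numpy as np

def sparse_regression(Theta, dxdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares for equation discovery.

    Theta : (n_samples, n_terms) library of candidate terms.
    dxdt  : (n_samples,) observed time derivatives.
    Returns a sparse coefficient vector xi with Theta @ xi ~= dxdt.
    """
    xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0            # prune negligible terms
        big = ~small
        if big.any():              # refit the remaining active terms
            coef, *_ = np.linalg.lstsq(Theta[:, big], dxdt, rcond=None)
            xi[big] = coef
    return xi

# Recover dx/dt = -2 x + 0.5 x^3 from noiseless samples
x = np.linspace(-2.0, 2.0, 100)
dxdt = -2.0 * x + 0.5 * x**3
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])  # candidate library
xi = sparse_regression(Theta, dxdt)
# xi is close to [0, -2, 0, 0.5]: only the true terms survive
```

In the evolutionary variant, genetic programming proposes the candidate term library instead of fixing it in advance, while a sparse fit like this one scores each candidate structure.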
arXiv Detail & Related papers (2020-04-03T17:21:57Z) - Nonlinear Traffic Prediction as a Matrix Completion Problem with
Ensemble Learning [1.8352113484137629]
This paper addresses the problem of short-term traffic prediction for signalized traffic operations management.
We focus on predicting sensor states at high resolution (second by second).
Our contributions can be summarized as offering three insights.
arXiv Detail & Related papers (2020-01-08T13:10:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.