Learning of Population Dynamics: Inverse Optimization Meets JKO Scheme
- URL: http://arxiv.org/abs/2506.01502v1
- Date: Mon, 02 Jun 2025 10:08:03 GMT
- Title: Learning of Population Dynamics: Inverse Optimization Meets JKO Scheme
- Authors: Mikhail Persiianov, Jiawei Chen, Petr Mokrov, Alexander Tyurin, Evgeny Burnaev, Alexander Korotin
- Abstract summary: Learning population dynamics involves recovering the underlying process that governs particle evolution. Recent methods frame this as an energy minimization problem in probability space and leverage the celebrated JKO scheme for efficient time discretization.
- Score: 98.04239223945508
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning population dynamics involves recovering the underlying process that governs particle evolution, given evolutionary snapshots of samples at discrete time points. Recent methods frame this as an energy minimization problem in probability space and leverage the celebrated JKO scheme for efficient time discretization. In this work, we introduce $\texttt{iJKOnet}$, an approach that combines the JKO framework with inverse optimization techniques to learn population dynamics. Our method relies on a conventional $\textit{end-to-end}$ adversarial training procedure and does not require restrictive architectural choices, e.g., input-convex neural networks. We establish theoretical guarantees for our methodology and demonstrate improved performance over prior JKO-based methods.
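The JKO scheme discretizes gradient flows in probability space: each step solves $\rho_{k+1} = \arg\min_\rho E(\rho) + \tfrac{1}{2\tau} W_2^2(\rho, \rho_k)$, trading off energy decrease against Wasserstein movement. As a rough illustration only (not the $\texttt{iJKOnet}$ method itself), a single JKO step can be sketched on particles, approximating the $W_2^2$ term by the mean squared displacement from the previous particle positions, which is an assumed simplification:

```python
import numpy as np

def jko_step(particles, grad_energy, tau=0.5, n_iters=200, lr=0.05):
    """One particle-based JKO step: approximately minimize
    E(rho) + W2^2(rho, rho_k) / (2*tau) over particle positions.
    W2^2 is crudely approximated by the squared displacement of each
    particle from its previous position (a fixed-pairing shortcut)."""
    x_prev = particles.copy()
    x = particles.copy()
    for _ in range(n_iters):
        # gradient of E at the particles plus the movement penalty
        g = grad_energy(x) + (x - x_prev) / tau
        x -= lr * g
    return x

# toy energy E(rho) = E_rho[ ||x||^2 / 2 ], whose gradient pushes
# particles toward the origin; the JKO step moves them only partway
grad_energy = lambda x: x
x0 = np.random.default_rng(0).normal(2.0, 0.5, size=(256, 2))
x1 = jko_step(x0, grad_energy, tau=0.5)
print(np.linalg.norm(x1.mean(axis=0)) < np.linalg.norm(x0.mean(axis=0)))
```

With this quadratic toy energy the step has a closed-form fixed point, `x* = x_prev / (1 + tau)`, so the particles contract toward the origin by a factor controlled by the step size $\tau$; smaller $\tau$ means more conservative movement per step.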
Related papers
- An Augmented Backward-Corrected Projector Splitting Integrator for Dynamical Low-Rank Training [47.69709732622765]
We introduce a novel low-rank training method that reduces the number of required QR decompositions. Our approach integrates an augmentation step into a projector-splitting scheme, ensuring convergence to a locally optimal solution.
arXiv Detail & Related papers (2025-02-05T09:03:50Z) - Deep Reinforcement Learning for Online Optimal Execution Strategies [49.1574468325115]
This paper tackles the challenge of learning non-Markovian optimal execution strategies in dynamic financial markets.
We introduce a novel actor-critic algorithm based on Deep Deterministic Policy Gradient (DDPG).
We show that our algorithm successfully approximates the optimal execution strategy.
arXiv Detail & Related papers (2024-10-17T12:38:08Z) - An Efficient On-Policy Deep Learning Framework for Stochastic Optimal Control [14.832859803172846]
We present a novel on-policy algorithm for solving stochastic optimal control (SOC) problems. By leveraging the Girsanov theorem, our method directly computes on-policy gradients of the SOC objective without expensive backpropagation through differential equations or adjoint problem solutions. Experimental results demonstrate substantial improvements in both computational speed and memory efficiency compared to existing approaches.
arXiv Detail & Related papers (2024-10-07T16:16:53Z) - Learning diffusion at lightspeed [4.154846138501937]
Existing models to learn the diffusion terms from observational data rely on complex bilevel optimization problems.
We propose a new simple model, JKOnet*, which bypasses the complexity of existing architectures.
JKOnet* minimizes a simple quadratic loss and outperforms other baselines in terms of sample efficiency, computational complexity, and accuracy.
arXiv Detail & Related papers (2024-06-18T13:44:07Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizer and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics [96.9177297872723]
We present a novel method for guaranteeing linear momentum in learned physics simulations.
We enforce conservation of momentum with a hard constraint, which we realize via antisymmetrical continuous convolutional layers.
In combination, the proposed method allows us to increase the physical accuracy of the learned simulator substantially.
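The momentum guarantee rests on Newton's third law: if every pairwise interaction is antisymmetric ($F_{ij} = -F_{ji}$), the total force, and hence the change in linear momentum, sums to exactly zero. A minimal sketch of this hard constraint, using a hypothetical toy pair force in place of the paper's antisymmetrical continuous convolutional layers:

```python
import numpy as np

def antisymmetric_forces(x, pair_force):
    """Accumulate pairwise forces with F_ij = -F_ji enforced by
    construction, so the total force (and momentum change) is
    exactly zero regardless of the pair_force function used."""
    f = np.zeros_like(x)
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            fij = pair_force(x[i], x[j])  # force on particle i from j
            f[i] += fij
            f[j] -= fij                   # antisymmetry: reaction force
    return f

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 3))
# toy spring-like attraction along the separation vector (an assumption,
# standing in for a learned interaction)
pair = lambda xi, xj: 0.1 * (xj - xi)
f = antisymmetric_forces(x, pair)
print(np.allclose(f.sum(axis=0), 0.0))
```

Because the constraint is structural rather than a soft penalty, momentum is conserved exactly even when the interaction function is a learned, imperfect model.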
arXiv Detail & Related papers (2022-10-12T09:12:59Z) - Distributed Evolution Strategies for Black-box Stochastic Optimization [42.90600124972943]
This work concerns the evolutionary approaches to distributed black-box optimization.
Each worker individually solves an approximation of the problem.
We propose two alternative simulation schemes that significantly improve robustness.
arXiv Detail & Related papers (2022-04-09T11:18:41Z) - Reinforcement Learning with Fast Stabilization in Linear Dynamical Systems [91.43582419264763]
We study model-based reinforcement learning (RL) in unknown stabilizable linear dynamical systems.
We propose an algorithm that certifies fast stabilization of the underlying system by effectively exploring the environment.
We show that the proposed algorithm attains $\tilde{\mathcal{O}}(\sqrt{T})$ regret after $T$ time steps of agent-environment interaction.
arXiv Detail & Related papers (2020-07-23T23:06:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.