Path differentiability of ODE flows
- URL: http://arxiv.org/abs/2201.03819v1
- Date: Tue, 11 Jan 2022 07:56:33 GMT
- Title: Path differentiability of ODE flows
- Authors: Swann Marx (LS2N), Edouard Pauwels (IRIT)
- Abstract summary: We consider flows of ordinary differential equations driven by path differentiable vector fields.
Our main result states that such flows inherit the path differentiability property of the driving vector field.
Indeed, we show that forward propagation of the derivatives given by the sensitivity differential inclusions provides a conservative Jacobian for the flow.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider flows of ordinary differential equations (ODEs) driven by path
differentiable vector fields. Path differentiable functions constitute a proper
subclass of Lipschitz functions which admit conservative gradients, a notion of
generalized derivative compatible with basic calculus rules. Our main result
states that such flows inherit the path differentiability property of the
driving vector field. Indeed, we show that forward propagation of the
derivatives given by the sensitivity differential inclusions provides a
conservative Jacobian for the flow. This allows us to propose a nonsmooth
version of the adjoint method, which can be applied to integral costs under an
ODE constraint. This result provides a theoretical grounding for the
application of small-step first-order methods to a broad class of nonsmooth
optimization problems with parametrized ODE constraints, as illustrated by the
convergence of small-step first-order methods based on the proposed nonsmooth
adjoint.
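The forward-sensitivity construction described in the abstract can be sketched on a toy problem. The following is a minimal, illustrative Python sketch, not the paper's method: an Euler discretization of dx/dt = -relu(theta*x), with the sensitivity s = dx/dtheta propagated alongside the state using one selected element of ReLU's conservative gradient. The vector field, derivative selection at the kink, step sizes, and iteration counts are all assumptions chosen for illustration.

```python
def flow_and_sensitivity(theta, x0=1.0, T=1.0, n=1000):
    """Euler discretization of dx/dt = -relu(theta * x), propagating the
    sensitivity s = dx/dtheta alongside the state (forward sensitivity)."""
    dt = T / n
    x, s = x0, 0.0
    for _ in range(n):
        g = 1.0 if theta * x > 0.0 else 0.0  # selected generalized derivative of relu
        # For f(x, theta) = -relu(theta * x): df/dx = -g*theta, df/dtheta = -g*x.
        s = s + dt * (-g * theta * s - g * x)
        x = x + dt * (-max(theta * x, 0.0))
    return x, s

# Small-step descent on the terminal cost c(theta) = x_T(theta)**2,
# using 2 * x_T * s as the (generalized) derivative of c.
theta = 0.5
for _ in range(20):
    x_T, s = flow_and_sensitivity(theta)
    theta -= 0.1 * 2.0 * x_T * s
```

For theta = 0.5 the trajectory stays in the region where the ReLU is active, so the computed sensitivity matches the classical value -exp(-theta) up to discretization error.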
Related papers
- Discrete Differential Principle for Continuous Smooth Function Representation [5.897186764108586]
Taylor's formula suffers from the curse of dimensionality and from error propagation during derivative computation in discrete settings. We propose a new discrete differential operator to estimate derivatives and to represent continuous smooth functions locally. Our technique offers broad applicability across domains such as vision representation, feature extraction, fluid mechanics, and cross-media imaging.
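For contrast, the standard Taylor-based baseline that such discrete operators are typically compared against is a plain central difference; this snippet is an illustrative textbook baseline, not the operator proposed in the paper:

```python
def central_difference(f, x, h=1e-5):
    """Second-order central difference: a Taylor-expansion-based derivative
    estimate with truncation error O(h**2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```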
arXiv Detail & Related papers (2025-07-13T03:43:23Z) - Efficient Differentiable Approximation of Generalized Low-rank Regularization [64.73416824444328]
Low-rank regularization (LRR) has been widely applied in various machine learning tasks. In this paper, we propose an efficient differentiable approximation of LRR.
arXiv Detail & Related papers (2025-05-21T11:49:17Z) - Learning second-order TVD flux limiters using differentiable solvers [2.4746157841644267]
This paper presents a data-driven framework for learning optimal second-order total variation diminishing (TVD) flux limiters via differentiable simulations.
In our fully differentiable finite volume solvers, the limiter functions are replaced by neural networks.
We show that a limiter trained solely on linear advection exhibits strong generalizability, surpassing the accuracy of most classical flux limiters.
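For reference, two classical second-order TVD limiters of the kind the learned limiter is benchmarked against can be written as plain limiter functions phi(r) of the slope ratio; these are textbook formulas, not the trained network:

```python
def minmod(r):
    """Minmod limiter: the most diffusive corner of the second-order TVD region."""
    return max(0.0, min(1.0, r))

def superbee(r):
    """Superbee limiter: the least diffusive corner of the second-order TVD region."""
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))
```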
arXiv Detail & Related papers (2025-03-11T01:19:39Z) - Variational formulation based on duality to solve partial differential equations: Use of B-splines and machine learning approximants [0.0]
Many partial differential equations (PDEs) do not have an exact, primal variational structure.
A variational principle based on the dual (Lagrange multiplier) field was proposed.
We derive the dual weak form for the linear, one-dimensional, transient convection-diffusion equation.
arXiv Detail & Related papers (2024-12-02T07:53:47Z) - Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
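A minimal sketch of the underlying idea, under illustrative assumptions: Gaussian smoothing replaces a non-differentiable f by E[f(x + sigma*eps)], whose derivative admits the perturbation-based Monte Carlo estimator below; subtracting f(x) as a control variate is one simple variance reduction device (the paper's estimators and applications are more elaborate):

```python
import random

def smoothed_grad(f, x, sigma=0.1, n=20000, seed=0):
    """Monte Carlo estimate of d/dx E[f(x + sigma*eps)], eps ~ N(0, 1),
    using f(x) as a control variate to reduce variance."""
    rng = random.Random(seed)
    fx = f(x)
    total = 0.0
    for _ in range(n):
        eps = rng.gauss(0.0, 1.0)
        total += (f(x + sigma * eps) - fx) * eps
    return total / (n * sigma)
```

At x = 1.0 with f = abs, the smoothed derivative is close to 1, and the control variate keeps the estimator's standard error around 0.01 at this sample size.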
arXiv Detail & Related papers (2024-10-10T17:10:00Z) - A Physics-Informed Machine Learning Approach for Solving Distributed Order Fractional Differential Equations [0.0]
This paper introduces a novel methodology for solving distributed-order fractional differential equations using a physics-informed machine learning framework.
By embedding the distributed-order functional equation into the support vector regression (SVR) framework, we incorporate physical laws directly into the learning process.
The effectiveness of the proposed approach is validated through a series of numerical experiments on Caputo-based distributed-order fractional differential equations.
arXiv Detail & Related papers (2024-09-05T13:20:10Z) - Finite Operator Learning: Bridging Neural Operators and Numerical Methods for Efficient Parametric Solution and Optimization of PDEs [0.0]
We introduce a method that combines neural operators, physics-informed machine learning, and standard numerical methods for solving PDEs.
We can parametrically solve partial differential equations in a data-free manner and provide accurate sensitivities.
Our study focuses on the steady-state heat equation within heterogeneous materials.
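As a toy illustration of the target problem class (not the paper's operator-learning method), the 1D steady-state heat equation with heterogeneous conductivity, d/dx(k(x) dT/dx) = 0, can be solved with a standard finite-difference scheme; the grid size and boundary values below are arbitrary choices:

```python
import numpy as np

def solve_heat_1d(k, n=101, T_left=0.0, T_right=1.0):
    """Finite-difference solve of d/dx(k(x) dT/dx) = 0 on [0, 1] with
    Dirichlet boundary conditions, conductivity sampled at cell faces."""
    x = np.linspace(0.0, 1.0, n)
    kf = k((x[:-1] + x[1:]) / 2.0)        # conductivity at the n-1 cell faces
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0;  b[0] = T_left         # left Dirichlet condition
    A[-1, -1] = 1.0; b[-1] = T_right      # right Dirichlet condition
    for i in range(1, n - 1):             # conservative interior stencil
        A[i, i - 1] = kf[i - 1]
        A[i, i] = -(kf[i - 1] + kf[i])
        A[i, i + 1] = kf[i]
    return x, np.linalg.solve(A, b)
```

With constant conductivity the discrete solution is the exact linear profile, a quick sanity check for the assembly.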
arXiv Detail & Related papers (2024-07-04T21:23:12Z) - Adversarial flows: A gradient flow characterization of adversarial attacks [1.8749305679160366]
A popular method for performing adversarial attacks on neural networks is the so-called fast gradient sign method.
We show convergence of the discretization to the associated gradient flow.
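The fast gradient sign method itself is a one-liner; under the paper's viewpoint, each such step can be read as an explicit Euler step of an associated flow. The sketch below is the standard FGSM update, with the loss gradient assumed given:

```python
import numpy as np

def fgsm_step(x, grad, eps):
    """One fast-gradient-sign step: move each coordinate by eps in the
    sign of the loss gradient (an l_inf-normalized ascent direction)."""
    return x + eps * np.sign(grad)
```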
arXiv Detail & Related papers (2024-06-08T07:05:26Z) - Revisiting Implicit Differentiation for Learning Problems in Optimal
Control [31.622109513774635]
This paper proposes a new method for differentiating through optimal trajectories arising from non-convex, constrained discrete optimal control (COC) problems.
We show that the trajectory derivatives scale linearly with the number of timesteps and offer significantly improved scalability with model size.
arXiv Detail & Related papers (2023-10-23T00:51:24Z) - On the Identification and Optimization of Nonsmooth Superposition
Operators in Semilinear Elliptic PDEs [3.045851438458641]
We study an infinite-dimensional optimization problem that aims to identify the Nemytskii operator in the nonlinear part of a prototypical semilinear elliptic partial differential equation (PDE).
In contrast to previous works, we consider this identification problem in a low-regularity regime in which the function inducing the Nemytskii operator is a priori only known to be an element of $H^1_{\mathrm{loc}}(\mathbb{R})$.
arXiv Detail & Related papers (2023-06-08T13:33:20Z) - Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients due to the non-differentiable discrete function during training.
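A common workaround for the zero/infinite-gradient issue, and a baseline such works build on, is the straight-through estimator (STE); the sketch below is the standard STE for sign binarization, not the Ricci-flow approach proposed in the paper:

```python
import numpy as np

def quantize_forward(w):
    """Forward pass: non-differentiable binarization of the weights."""
    return np.sign(w)

def quantize_backward(grad_out, w, clip=1.0):
    """Backward pass via the straight-through estimator: treat sign as the
    identity on |w| <= clip and block the gradient elsewhere."""
    return grad_out * (np.abs(w) <= clip)
```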
arXiv Detail & Related papers (2023-02-07T10:51:53Z) - Last-Iterate Convergence of Saddle-Point Optimizers via High-Resolution
Differential Equations [83.3201889218775]
Several widely-used first-order saddle-point optimization methods yield an identical continuous-time ordinary differential equation (ODE) when derived naively.
However, the convergence properties of these methods are qualitatively different, even on simple bilinear games.
We adopt a framework studied in fluid dynamics to design differential equation models for several saddle-point optimization methods.
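The qualitative gap can be reproduced numerically on the simplest bilinear game f(x, y) = x*y: simultaneous gradient descent-ascent spirals outward while the extragradient variant contracts, even though both discretize the same naive ODE. The step size and iteration count below are illustrative choices:

```python
import math

def gda(x, y, eta):
    # Simultaneous gradient descent-ascent on f(x, y) = x * y.
    return x - eta * y, y + eta * x

def extragrad(x, y, eta):
    # Extragradient: evaluate the vector field at a lookahead point first.
    xh, yh = x - eta * y, y + eta * x
    return x - eta * yh, y + eta * xh

def final_norm(step, n=200, eta=0.1):
    x, y = 1.0, 1.0
    for _ in range(n):
        x, y = step(x, y, eta)
    return math.hypot(x, y)
```

Starting from (1, 1), the GDA iterates grow in norm by a factor sqrt(1 + eta**2) per step, while extragradient shrinks toward the saddle point at the origin.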
arXiv Detail & Related papers (2021-12-27T18:31:34Z) - Deep Learning Approximation of Diffeomorphisms via Linear-Control
Systems [91.3755431537592]
We consider a control system of the form $\dot{x} = \sum_{i=1}^{l} F_i(x)\,u_i$, with linear dependence on the controls.
We use the corresponding flow to approximate the action of a diffeomorphism on a compact ensemble of points.
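A minimal sketch of integrating such a flow, in one dimension with piecewise-constant controls (an illustrative simplification of the point ensembles and diffeomorphisms considered in the paper):

```python
def euler_flow(x0, fields, controls, dt):
    """Euler integration of dx/dt = sum_i u_i F_i(x) for piecewise-constant
    controls; `controls` is a list of per-step coefficient tuples and
    `fields` a matching list of vector fields F_i."""
    x = float(x0)
    for u in controls:
        x = x + dt * sum(ui * F(x) for ui, F in zip(u, fields))
    return x
```

With the single field F(x) = x and constant control u = 1, the scheme reproduces discretized exponential growth, matching the exact flow exp(t) up to Euler error.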
arXiv Detail & Related papers (2021-10-24T08:57:46Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
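The basic conditional gradient (Frank-Wolfe) step the framework builds on can be sketched for the probability simplex; the objective, step-size rule, and iteration budget below are illustrative, and the snippet omits the paper's constraint subsampling, variance reduction, and smoothing:

```python
import numpy as np

def frank_wolfe_simplex(grad_f, x0, n_iters=500):
    """Conditional gradient over the probability simplex: the linear
    minimization oracle returns the vertex with the smallest gradient
    entry, and the standard diminishing step size 2/(k+2) is used."""
    x = np.array(x0, dtype=float)
    for k in range(n_iters):
        g = grad_f(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0            # LMO: best simplex vertex
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x
```

Iterates stay feasible by construction (convex combinations of vertices), and on a smooth objective the method converges at the classical O(1/k) rate.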
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.