DiffPD: Differentiable Projective Dynamics with Contact
- URL: http://arxiv.org/abs/2101.05917v1
- Date: Fri, 15 Jan 2021 00:13:33 GMT
- Title: DiffPD: Differentiable Projective Dynamics with Contact
- Authors: Tao Du, Kui Wu, Pingchuan Ma, Sebastien Wah, Andrew Spielberg, Daniela
Rus, Wojciech Matusik
- Abstract summary: We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
- Score: 65.88720481593118
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel, fast differentiable simulator for soft-body learning and
control applications. Existing differentiable soft-body simulators can be
classified into two categories based on their time integration methods.
Simulators using explicit time-stepping schemes require tiny time steps to avoid
numerical instabilities in gradient computation, and simulators using implicit
time integration typically compute gradients by employing the adjoint method to
solve the expensive linearized dynamics. Inspired by Projective Dynamics (PD),
we present DiffPD, an efficient differentiable soft-body simulator with
implicit time integration. The key idea in DiffPD is to speed up
backpropagation by exploiting the prefactorized Cholesky decomposition in PD to
achieve a super-linear convergence rate. To handle contacts, DiffPD solves
contact forces by analyzing a linear complementarity problem (LCP) and its
gradients. With the assumption that contacts occur on a small number of nodes,
we develop an efficient method for gradient computation by exploring the
low-rank structure in the linearized dynamics. We evaluate the performance of
DiffPD and observe a speedup of 4-19 times compared to the standard Newton's
method in various applications including system identification, inverse design
problems, trajectory optimization, and closed-loop control.
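The key idea above — factorize the constant Projective Dynamics system matrix once and reuse it for both the forward local-global iterations and the backward (adjoint) solves — can be illustrated with a minimal sketch. This is a toy stand-in, not the authors' implementation: the matrix, dimensions, and right-hand sides are placeholders.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy SPD stand-in for the PD system matrix A = M/h^2 + sum_i w_i S_i^T S_i.
# In Projective Dynamics this matrix is constant across time steps, so it
# can be Cholesky-factorized once and the factor reused thereafter.
n = 50
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite

# Prefactorize once; every subsequent solve is a cheap back-substitution.
factor = cho_factor(A)

def pd_solve(rhs):
    """Solve A x = rhs using the cached Cholesky factor."""
    return cho_solve(factor, rhs)

# Forward pass: several local-global iterations share the same factor.
x = np.zeros(n)
for _ in range(10):
    rhs = rng.standard_normal(n)   # placeholder for the "local" projections
    x = pd_solve(rhs)

# Backward pass: the adjoint system has the same symmetric matrix, so
# backpropagation reuses the factor instead of refactorizing, which is
# where the speedup over Newton's method comes from.
grad_out = rng.standard_normal(n)
adjoint = pd_solve(grad_out)
```

A Newton-style solver would rebuild and refactorize the linearized system every step; the sketch above amortizes one factorization over all forward and backward solves.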
Related papers
- Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z) - Improving Gradient Computation for Differentiable Physics Simulation
with Contacts [10.450509067356148]
We study differentiable rigid-body simulation with contacts.
We propose to improve gradient computation via continuous collision detection, leveraging the time-of-impact (TOI).
We show that with TOI-Ve, we are able to learn an optimal control sequence that matches the analytical solution.
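To illustrate what time-of-impact integration means, here is a minimal 1-D sketch. The scenario (a particle bouncing off the ground with a restitution coefficient) is a hypothetical example for intuition, not the paper's method: naive discrete stepping resolves contact at the step boundary, whereas computing the exact TOI within the step makes the post-contact state a smooth function of the initial conditions.

```python
def step_with_toi(x0, v, dt, restitution=1.0):
    """One step for a particle above the ground plane x = 0.

    If the free-flight trajectory crosses the ground, compute the exact
    time of impact (TOI) inside [0, dt], reflect the velocity there, and
    integrate the remaining time, instead of snapping at the step end.
    """
    x_free = x0 + v * dt
    if x_free >= 0.0 or v >= 0.0:
        return x_free, v               # no contact during this step
    toi = -x0 / v                      # exact time of impact in [0, dt]
    v_post = -restitution * v          # reflect velocity at impact
    x_post = v_post * (dt - toi)       # integrate the remaining dt - toi
    return x_post, v_post

# Example: start at height 1.0 moving down at 4.0; impact occurs at t = 0.25.
x, v = step_with_toi(1.0, -4.0, 0.5)   # → (1.0, 4.0)
```

Because `toi` depends differentiably on `x0` and `v`, gradients of the post-contact state with respect to the initial conditions are well defined, which is the property the TOI-based approach exploits.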
arXiv Detail & Related papers (2023-04-28T21:10:16Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the supervised time point and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Closed-Form Diffeomorphic Transformations for Time Series Alignment [0.0]
We present a closed-form expression for the ODE solution and its gradient under continuous piecewise-affine velocity functions.
Results show significant improvements both in terms of efficiency and accuracy.
arXiv Detail & Related papers (2022-06-16T12:02:12Z) - A memory-efficient neural ODE framework based on high-level adjoint
differentiation [4.063868707697316]
We present a new neural ODE framework, PNODE, based on high-level discrete algorithmic differentiation.
We show that PNODE achieves the highest memory efficiency when compared with other reverse-accurate methods.
arXiv Detail & Related papers (2022-06-02T20:46:26Z) - Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z) - Efficient Differentiable Simulation of Articulated Bodies [89.64118042429287]
We present a method for efficient differentiable simulation of articulated bodies.
This enables integration of articulated body dynamics into deep learning frameworks.
We show that reinforcement learning with articulated systems can be accelerated using gradients provided by our method.
arXiv Detail & Related papers (2021-09-16T04:48:13Z) - Fast Gravitational Approach for Rigid Point Set Registration with
Ordinary Differential Equations [79.71184760864507]
This article introduces a new physics-based method for rigid point set alignment called Fast Gravitational Approach (FGA).
In FGA, the source and target point sets are interpreted as rigid particle swarms with masses interacting in a globally multiply-linked manner while moving in a simulated gravitational force field.
We show that the new method class has characteristics not found in previous alignment methods.
arXiv Detail & Related papers (2020-09-28T15:05:39Z) - Hierarchical Deep Learning of Multiscale Differential Equation
Time-Steppers [5.6385744392820465]
We develop a hierarchy of deep neural network time-steppers to approximate the flow map of the dynamical system over a disparate range of time-scales.
The resulting model is purely data-driven and leverages features of the multiscale dynamics.
We benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN.
arXiv Detail & Related papers (2020-08-22T07:16:53Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under the sparsity constraint.
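A bilinear problem of the kind the summary describes can be sketched with plain gradient descent on a rank-1 factorization, where the two variables are coupled through their product. This toy objective and its step sizes are assumptions for illustration, not the CoGD algorithm itself, which handles the coupling and sparsity constraint more carefully.

```python
import numpy as np

# Toy bilinear objective: f(u, v) = ||M - u v^T||_F^2. The gradient in u
# depends on v and vice versa, which is the coupling that bilinear
# optimization methods such as CoGD are designed to exploit.
rng = np.random.default_rng(1)
M = np.outer([1.0, 2.0], [3.0, 1.0])   # rank-1 target
u = rng.standard_normal(2)
v = rng.standard_normal(2)
lr = 0.01
for _ in range(5000):
    R = np.outer(u, v) - M             # residual of the bilinear model
    gu = 2.0 * R @ v                   # df/du, coupled through v
    gv = 2.0 * R.T @ u                 # df/dv, coupled through u
    u -= lr * gu
    v -= lr * gv
loss = float(np.sum((np.outer(u, v) - M) ** 2))
```

Updating both variables each iteration, each with the other's current value, is the "synchronous" flavor of descent the summary mentions, as opposed to fully alternating minimization.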
arXiv Detail & Related papers (2020-06-16T13:41:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.