Continuous Learned Primal Dual
- URL: http://arxiv.org/abs/2405.02478v1
- Date: Fri, 3 May 2024 20:40:14 GMT
- Title: Continuous Learned Primal Dual
- Authors: Christina Runkel, Ander Biguri, Carola-Bibiane Schönlieb
- Abstract summary: Neural ODEs propose the idea that a sequence of layers in a neural network is just a discretisation of an ODE, and thus can be directly modelled by a parameterised ODE.
In this work, we explore the use of Neural ODEs for learned inverse problems, in particular with the well-known Learned Primal Dual algorithm, and apply it to computed tomography (CT) reconstruction.
- Score: 10.111901389604423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural ordinary differential equations (Neural ODEs) propose the idea that a sequence of layers in a neural network is just a discretisation of an ODE, and thus can instead be directly modelled by a parameterised ODE. This idea has had resounding success in the deep learning literature, with direct or indirect influence in many state-of-the-art ideas, such as diffusion models or time-dependent models. Recently, a continuous version of the U-net architecture has been proposed, showing increased performance over its discrete counterpart in many imaging applications and wrapped with theoretical guarantees around its performance and robustness. In this work, we explore the use of Neural ODEs for learned inverse problems, in particular with the well-known Learned Primal Dual algorithm, and apply it to computed tomography (CT) reconstruction.
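The abstract's central idea, that a stack of residual layers is a discretisation of an ODE, can be made concrete with a short sketch. The following is a minimal illustration, assuming PyTorch and not taken from the authors' code: a weight-tied residual stack read as the forward-Euler discretisation of dz/dt = f_theta(z, t). The ODEFunc architecture, channel count, step count, and the suggested placement inside a Learned Primal Dual update are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch -- not the paper's implementation.
import torch
import torch.nn as nn

class ODEFunc(nn.Module):
    """Parameterised vector field f_theta(z, t), shared across the whole trajectory."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # Illustrative architecture; the paper's actual blocks are not reproduced here.
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Broadcast t as an extra channel so the dynamics can be time dependent.
        t_map = t.expand(z.shape[0], 1, *z.shape[2:])
        return self.net(torch.cat([z, t_map], dim=1))

def euler_integrate(func: nn.Module, z0: torch.Tensor,
                    n_steps: int = 10, T: float = 1.0) -> torch.Tensor:
    """Forward-Euler solve of dz/dt = f(z, t); with n_steps fixed steps this is
    exactly a weight-tied residual network, z_{k+1} = z_k + h * f(z_k, t_k)."""
    z, h = z0, T / n_steps
    for k in range(n_steps):
        z = z + h * func(z, torch.tensor(k * h))
    return z

# Usage: one "continuous block" standing in for a stack of discrete residual layers,
# e.g. inside a primal or dual update of a Learned Primal Dual network (hypothetical placement).
func = ODEFunc(channels=32)
z0 = torch.randn(4, 32, 64, 64)            # batch of feature maps (illustrative shape)
zT = euler_integrate(func, z0, n_steps=10)
print(zT.shape)                            # torch.Size([4, 32, 64, 64])
```

Letting an adaptive ODE solver replace the fixed Euler loop is what turns this residual block into a Neural ODE; the paper explores that substitution within the Learned Primal Dual iterations for CT reconstruction.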
Related papers
- Neural Delay Differential Equations: System Reconstruction and Image Classification [14.59919398960571]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
Compared to NODEs, NDDEs have a stronger capacity for nonlinear representation.
We achieve lower loss and higher accuracy not only on synthetically produced data but also on CIFAR10, a well-known image dataset.
arXiv Detail & Related papers (2023-04-11T16:09:28Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- Accelerating Neural ODEs Using Model Order Reduction [0.0]
We show that mathematical model order reduction methods can be used for compressing and accelerating Neural ODEs.
We implement our novel compression method by developing Neural ODEs that integrate the necessary subspace-projection and interpolation operations as layers of the neural network.
arXiv Detail & Related papers (2021-05-28T19:27:09Z)
- Neural Delay Differential Equations [9.077775405204347]
We propose a new class of continuous-depth neural networks with delay, named Neural Delay Differential Equations (NDDEs).
For computing the corresponding gradients, we use the adjoint sensitivity method to obtain the delayed dynamics of the adjoint.
Our results reveal that appropriately incorporating elements of dynamical systems into the network design is genuinely beneficial for network performance.
arXiv Detail & Related papers (2021-02-22T06:53:51Z)
- DyNODE: Neural Ordinary Differential Equations for Dynamics Modeling in Continuous Control [0.0]
We present a novel approach that captures the underlying dynamics of a system by incorporating control in a neural ordinary differential equation framework.
Results indicate that a simple DyNODE architecture, when combined with an actor-critic reinforcement learning algorithm, outperforms canonical neural networks.
arXiv Detail & Related papers (2020-09-09T12:56:58Z)
- Deep Variational Network Toward Blind Image Restoration [60.45350399661175]
Blind image restoration is a common yet challenging problem in computer vision.
We propose a novel blind image restoration method, aiming to integrate the advantages of both model-based and deep learning-based approaches.
Experiments on two typical blind IR tasks, namely image denoising and super-resolution, demonstrate that the proposed method achieves superior performance over current state-of-the-art methods.
arXiv Detail & Related papers (2020-08-25T03:30:53Z)
- Hypersolvers: Toward Fast Continuous-Depth Models [16.43439140464003]
We introduce hypersolvers, neural networks designed to solve ODEs with low overhead and theoretical guarantees on accuracy.
The synergistic combination of hypersolvers and Neural ODEs allows for cheap inference and unlocks a new frontier for practical application of continuous-depth models.
arXiv Detail & Related papers (2020-07-19T06:31:31Z)
- STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
arXiv Detail & Related papers (2020-06-18T17:44:50Z)
- Time Dependence in Non-Autonomous Neural ODEs [74.78386661760662]
We propose a novel family of Neural ODEs with time-varying weights.
We outperform previous Neural ODE variants in both speed and representational capacity.
arXiv Detail & Related papers (2020-05-05T01:41:46Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical regularization gains, making the difference in performance between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.