Stochastic Primal-Dual Deep Unrolling Networks for Imaging Inverse
Problems
- URL: http://arxiv.org/abs/2110.10093v1
- Date: Tue, 19 Oct 2021 16:46:03 GMT
- Title: Stochastic Primal-Dual Deep Unrolling Networks for Imaging Inverse
Problems
- Authors: Junqi Tang
- Abstract summary: We present a new type of efficient deep-unrolling networks for solving imaging inverse problems.
In our unrolling network, each layer uses only a subset of the forward and adjoint operators.
Our numerical results demonstrate the effectiveness of our approach on an X-ray CT imaging task.
- Score: 3.7819322027528113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we present a new type of efficient deep-unrolling networks for
solving imaging inverse problems. Classical deep-unrolling methods require the full
forward operator and its adjoint at each layer, and hence can be
computationally more expensive than other end-to-end methods such as
FBP-ConvNet, especially in 3D image reconstruction tasks. We propose a
stochastic (ordered-subsets) extension of the Learned Primal-Dual (LPD) method,
the state-of-the-art unrolling network. In our unrolling network, each layer
uses only a subset of the forward and adjoint operators, to achieve computational
efficiency. We consider three ways of training the proposed network to cope with
different scenarios of training-data availability: (1)
supervised training on paired data, (2) unsupervised adversarial training, which
enables us to train the network without paired ground-truth data, and (3) an
equivariant self-supervised training approach, which exploits the equivariant
structure prevalent in many imaging applications and requires only
measurement data. Our numerical results demonstrate the effectiveness of our
approach on an X-ray CT imaging task, showing that our networks achieve
reconstruction accuracies similar to the full-batch LPD while requiring only a
fraction of the computation.
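To make the subset-based unrolling concrete, the sketch below shows what one stochastic LPD iteration could look like in PyTorch. The channel counts, the small CNNs standing in for the learned proximal steps, and the `forward_op` / `adjoint_op` callables (a subset of the CT projection operator and its adjoint) are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn


class StochasticLPDIteration(nn.Module):
    """One unrolled primal-dual iteration that touches a single data subset.

    `forward_op(image, subset)` and `adjoint_op(sino, subset)` are assumed to
    apply a subset of the CT projection operator and its adjoint; they are
    placeholders, not part of any specific library.
    """

    def __init__(self, n_primal=5, n_dual=5):
        super().__init__()
        # Small CNNs standing in for the learned dual/primal proximal steps.
        self.dual_net = nn.Sequential(
            nn.Conv2d(n_dual + 2, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, n_dual, 3, padding=1),
        )
        self.primal_net = nn.Sequential(
            nn.Conv2d(n_primal + 1, 32, 3, padding=1), nn.PReLU(),
            nn.Conv2d(32, n_primal, 3, padding=1),
        )

    def forward(self, primal, dual, y_subset, forward_op, adjoint_op, subset):
        # Dual update: only the chosen subset of the forward operator is used.
        Ax = forward_op(primal[:, :1], subset)          # partial sinogram
        dual = dual + self.dual_net(torch.cat([dual, Ax, y_subset], dim=1))
        # Primal update: only the matching subset of the adjoint is used.
        ATd = adjoint_op(dual[:, :1], subset)           # partial backprojection
        primal = primal + self.primal_net(torch.cat([primal, ATd], dim=1))
        return primal, dual
```

Stacking several such iterations, each paired with a different ordered subset of the measurements, yields an unrolled network whose per-layer cost scales with the subset size rather than with the full data.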
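For the measurement-only training scenario (3), a minimal sketch of an equivariance-based self-supervised loss is given below; the function names, the squared-error terms, and the equal weighting of the two terms are assumptions for illustration, not the paper's exact formulation.

```python
import torch


def equivariant_selfsup_loss(net, y, forward_op, transform):
    """Measurement-only training loss exploiting equivariance (illustrative).

    `net` maps measurements to an image, `forward_op` re-simulates
    measurements, and `transform` is a random group action (e.g. a rotation)
    under which the image class is assumed invariant.
    """
    # Reconstruct from the observed measurements.
    x_hat = net(y)
    # Measurement consistency: re-simulated data should match the observations.
    loss_mc = torch.mean((forward_op(x_hat) - y) ** 2)
    # Equivariance: reconstruction should commute with the transform.
    x_t = transform(x_hat)
    x_t_hat = net(forward_op(x_t))
    loss_eq = torch.mean((x_t_hat - x_t) ** 2)
    return loss_mc + loss_eq
```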
Related papers
- Self-Supervised Dual Contouring [30.9409064656302]
We propose a self-supervised training scheme for the Neural Dual Contouring meshing framework.
We use two novel self-supervised loss functions that encourage consistency between distances to the generated mesh.
We demonstrate that our self-supervised losses improve meshing performance in the single-view reconstruction task.
arXiv Detail & Related papers (2024-05-28T12:44:28Z) - Transfer Learning with Reconstruction Loss [12.906500431427716]
This paper proposes a novel approach for model training by adding to the model an additional reconstruction stage with an associated reconstruction loss.
The proposed approach encourages the learned features to be general and transferable, and therefore can be readily used for efficient transfer learning.
For numerical simulations, three applications are studied: transfer learning on classifying MNIST handwritten digits, the device-to-device wireless network power allocation, and the multiple-input-single-output network downlink beamforming and localization.
arXiv Detail & Related papers (2024-03-31T00:22:36Z) - Cross-domain and Cross-dimension Learning for Image-to-Graph
Transformers [50.576354045312115]
Direct image-to-graph transformation is a challenging task that solves object detection and relationship prediction in a single model.
We introduce a set of methods enabling cross-domain and cross-dimension transfer learning for image-to-graph transformers.
We demonstrate our method's utility in cross-domain and cross-dimension experiments, where we pretrain our models on 2D satellite images before applying them to vastly different target domains in 2D and 3D.
arXiv Detail & Related papers (2024-03-11T10:48:56Z) - On the training and generalization of deep operator networks [11.159056906971983]
We present a novel training method for deep operator networks (DeepONets).
DeepONets are constructed from two sub-networks.
We establish the width error estimate in terms of input data.
arXiv Detail & Related papers (2023-09-02T21:10:45Z) - PRSNet: A Masked Self-Supervised Learning Pedestrian Re-Identification
Method [2.0411082897313984]
This paper designs a pretext task of mask reconstruction to obtain a pre-trained model with strong robustness.
The training optimization of the network is performed by improving the triplet loss based on the centroid.
This method achieves about 5% higher mAP on Market-1501 and CUHK03 data than existing self-supervised learning pedestrian re-identification methods.
arXiv Detail & Related papers (2023-03-11T07:20:32Z) - Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z) - Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face
Learning [54.13876727413492]
In many real-world face recognition scenarios, the depth of the training dataset is shallow, meaning only two face images are available for each ID.
With the non-uniform increase of samples, this issue generalizes to a broader setting known as long-tail face learning.
Based on Semi-Siamese Training (SST), we introduce an advanced solution named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents, the former aims to encode the probe features, and the latter constitutes a stack of
arXiv Detail & Related papers (2021-05-10T04:57:32Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - Online Exemplar Fine-Tuning for Image-to-Image Translation [32.556050882376965]
Existing techniques to solve exemplar-based image-to-image translation within deep convolutional neural networks (CNNs) generally require a training phase to optimize the network parameters.
We propose a novel framework, for the first time, to solve exemplar-based translation through an online optimization given an input image pair.
Our framework does not require the offline training phase, which has been the main challenge of existing methods, but only pre-trained networks to enable online optimization.
arXiv Detail & Related papers (2020-11-18T15:13:16Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local
Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.