A Joint Convolutional and Spatial Quad-Directional LSTM Network for
Phase Unwrapping
- URL: http://arxiv.org/abs/2010.13268v1
- Date: Mon, 26 Oct 2020 01:04:19 GMT
- Authors: Malsha V. Perera, Ashwin De Silva
- Abstract summary: We introduce a novel Convolutional Neural Network (CNN) that incorporates a Spatial Quad-Directional Long Short Term Memory (SQD-LSTM) for phase unwrapping.
The proposed network is found to perform better than existing methods under severe noise conditions.
- Score: 7.716156977428555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Phase unwrapping is a classical ill-posed problem which aims to recover the
true phase from the wrapped phase. In this paper, we introduce a novel
Convolutional Neural Network (CNN) that incorporates a Spatial Quad-Directional
Long Short Term Memory (SQD-LSTM) for phase unwrapping, by formulating it as a
regression problem. Incorporating SQD-LSTM can circumvent the typical CNNs'
inherent difficulty of learning global spatial dependencies which are vital
when recovering the true phase. Furthermore, we employ a problem-specific
composite loss function to train this network. The proposed network is found
to perform better than existing methods under severe noise conditions
(Normalized Root Mean Square Error of 1.3% at SNR = 0 dB) while requiring
significantly less computation time (0.054 s). The network also does not
require a large scale dataset during training, thus making it ideal for
applications with limited data that require fast and accurate phase unwrapping.
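The problem the abstract describes can be illustrated with a minimal NumPy sketch (an illustrative example, not the paper's method): a sensor observes only the phase wrapped into [-π, π], and unwrapping must recover the absolute phase. In the noise-free 1D case a classical integrator such as `np.unwrap` (Itoh's method) suffices; the paper's SQD-LSTM network targets the noisy 2D regime where such local methods break down.

```python
import numpy as np

# True (absolute) phase: a smooth ramp that exceeds the [-pi, pi] range.
true_phase = np.linspace(0, 6 * np.pi, 500)

# Wrapping maps the phase into [-pi, pi]; this is what an interferometric
# measurement actually provides.
wrapped = np.angle(np.exp(1j * true_phase))

# Classical 1D unwrapping (Itoh's method, as implemented by np.unwrap)
# integrates the wrapped phase differences. It works here only because
# the signal is noise-free.
recovered = np.unwrap(wrapped)

# Normalized RMSE, the metric family quoted in the abstract. Normalizing
# by the phase range is one common convention; the paper may normalize
# differently.
nrmse = np.sqrt(np.mean((recovered - true_phase) ** 2)) / np.ptp(true_phase)
print(f"NRMSE: {nrmse:.2e}")
```

In the noise-free case the NRMSE is at the level of floating-point error; adding noise near SNR = 0 dB makes the wrapped differences ambiguous, which is the regime where the abstract reports the network's 1.3% NRMSE.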
Related papers
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z)
- An Improved Time Feedforward Connections Recurrent Neural Networks [3.0965505512285967]
Recurrent Neural Networks (RNNs) have been widely applied to deal with temporal problems, such as flood forecasting and financial data processing.
Traditional RNN models amplify the gradient issue due to the strict time serial dependency.
An improved Time Feedforward Connections Recurrent Neural Networks (TFC-RNNs) model was first proposed to address the gradient issue.
A novel cell structure named Single Gate Recurrent Unit (SGRU) was presented to reduce the number of parameters of the RNN cell.
arXiv Detail & Related papers (2022-11-03T09:32:39Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Go Beyond Multiple Instance Neural Networks: Deep-learning Models based
on Local Pattern Aggregation [0.0]
Convolutional neural networks (CNNs) have brought breakthroughs in processing clinical electrocardiograms (ECGs) and speaker-independent speech.
In this paper, we propose local pattern aggregation-based deep-learning models to effectively deal with both problems.
The novel network structure, called LPANet, has cropping and aggregation operations embedded into it.
arXiv Detail & Related papers (2022-05-28T13:18:18Z) - Truncated tensor Schatten p-norm based approach for spatiotemporal
traffic data imputation with complicated missing patterns [77.34726150561087]
We introduce four complicated missing patterns, including random missing and three fiber-like missing cases according to the mode-driven fibers.
Despite the nonconvexity of the objective function in our model, we derive the optimal solutions by integrating the alternating direction method of multipliers.
arXiv Detail & Related papers (2022-05-19T08:37:56Z) - An advanced spatio-temporal convolutional recurrent neural network for
storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z) - SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z) - Joint Superposition Coding and Training for Federated Learning over
Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. SNNs are, however, non-trivial, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z) - Long short-term relevance learning [0.0]
An efficient sparse Bayesian training algorithm is introduced to the network architecture.
The proposed scheme automatically determines relevant neural connections and adapts accordingly.
We show that the self-regulating framework does not require prior knowledge of a suitable network architecture and size.
arXiv Detail & Related papers (2021-06-21T09:07:17Z) - Hessian Aware Quantization of Spiking Neural Networks [1.90365714903665]
Neuromorphic architecture allows massively parallel computation with variable and local bit-precisions.
Current gradient based methods of SNN training use a complex neuron model with multiple state variables.
We present a simplified neuron model that reduces the number of state variables by 4-fold while still being compatible with gradient based training.
arXiv Detail & Related papers (2021-04-29T05:27:34Z)
- Ensemble long short-term memory (EnLSTM) network [0.456877715768796]
We propose an ensemble long short-term memory (EnLSTM) network, which can be trained on a small dataset and process sequential data.
The EnLSTM is proven to be the state-of-the-art model in generating well logs with a mean-square-error (MSE) reduction of 34%.
arXiv Detail & Related papers (2020-04-26T05:42:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.