A Deep Learning based Fast Signed Distance Map Generation
- URL: http://arxiv.org/abs/2005.12662v1
- Date: Tue, 26 May 2020 12:36:19 GMT
- Title: A Deep Learning based Fast Signed Distance Map Generation
- Authors: Zihao Wang, Clair Vandersteen, Thomas Demarcy, Dan Gnansia, Charles
Raffaelli, Nicolas Guevara, Hervé Delingette
- Abstract summary: A signed distance map (SDM) is a common representation of surfaces in medical image analysis and machine learning.
In this paper, we propose a learning-based SDM generation neural network, demonstrated on a three-dimensional cochlea shape model parameterized by 4 shape parameters.
The proposed SDM neural network generates a cochlea signed distance map from the four input parameters, and we show that the deep learning approach yields a 60-fold improvement in computation time.
- Score: 4.298890193377769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A signed distance map (SDM) is a common representation of surfaces
in medical image analysis and machine learning. The computational complexity of
generating SDMs for 3D parametric shapes is often a bottleneck in many
applications, limiting their practical use. In this paper, we propose a
learning-based SDM generation neural network, demonstrated on a
three-dimensional cochlea shape model parameterized by 4 shape parameters. The
proposed SDM neural network generates a cochlea signed distance map from the
four input parameters, and we show that the deep learning approach yields a
60-fold improvement in computation time compared to more classical SDM
generation methods. The proposed approach therefore achieves a good trade-off
between accuracy and efficiency.
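To make the SDM representation concrete, the sketch below computes an analytic signed distance map for a sphere on a small cubic grid: negative inside the surface, positive outside, zero on it. The sphere is an illustrative stand-in only; the paper's parametric cochlea shape (and the network that learns to approximate its SDM from 4 shape parameters) is not reproduced here, and all names and grid sizes are assumptions for illustration.

```python
import math

def sphere_sdm(grid_size, radius, center):
    """Analytic signed distance map of a sphere sampled on a cubic grid.

    Each voxel stores its Euclidean distance to the sphere's surface,
    signed negative inside and positive outside -- the SDM convention.
    A learned SDM generator, as in the paper, would replace this
    analytic evaluation with a neural network conditioned on shape
    parameters.
    """
    sdm = []
    for i in range(grid_size):
        plane = []
        for j in range(grid_size):
            row = []
            for k in range(grid_size):
                # Distance to the center minus the radius gives the
                # signed distance to the sphere's surface.
                row.append(math.dist((i, j, k), center) - radius)
            plane.append(row)
        sdm.append(plane)
    return sdm

sdm = sphere_sdm(9, 3.0, (4, 4, 4))
print(sdm[4][4][4])  # center voxel: -3.0 (inside, one radius deep)
print(sdm[4][4][7])  # voxel on the surface: 0.0
```

For a parametric family of shapes, computing such maps analytically or by distance transforms for every parameter setting is what becomes the bottleneck the paper addresses.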
Related papers
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Quantitative Susceptibility Mapping through Model-based Deep Image Prior (MoDIP) [10.230055884828445]
We propose a training-free, model-based unsupervised method called MoDIP (Model-based Deep Image Prior).
MoDIP comprises a small, untrained network and a Data Fidelity Optimization (DFO) module.
It is 33% more computationally efficient and runs 4 times faster than conventional DIP-based approaches.
arXiv Detail & Related papers (2023-08-18T11:07:39Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z)
- DeepSSN: a deep convolutional neural network to assess spatial scene similarity [11.608756441376544]
We propose a deep convolutional neural network, namely Deep Spatial Scene Network (DeepSSN), to better assess the spatial scene similarity.
We develop a prototype spatial scene search system using the proposed DeepSSN, in which users input spatial queries via sketch maps.
The proposed model is validated using multi-source conflated map data including 131,300 labeled scene samples after data augmentation.
arXiv Detail & Related papers (2022-02-07T23:53:20Z)
- Model-inspired Deep Learning for Light-Field Microscopy with Application to Neuron Localization [27.247818386065894]
We propose a model-inspired deep learning approach to perform fast and robust 3D localization of sources using light-field microscopy images.
This is achieved by developing a deep network that efficiently solves a convolutional sparse coding problem.
Experiments on localization of mammalian neurons from light-fields show that the proposed approach simultaneously provides enhanced performance, interpretability and efficiency.
arXiv Detail & Related papers (2021-03-10T16:24:47Z)
- TSGCNet: Discriminative Geometric Feature Learning with Two-Stream Graph Convolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z)
- Multi-view Depth Estimation using Epipolar Spatio-Temporal Networks [87.50632573601283]
We present a novel method for multi-view depth estimation from a single video.
Our method achieves temporally coherent depth estimation results by using a novel Epipolar Spatio-Temporal (EST) transformer.
To reduce the computational cost, inspired by recent Mixture-of-Experts models, we design a compact hybrid network.
arXiv Detail & Related papers (2020-11-26T04:04:21Z)
- Towards Reading Beyond Faces for Sparsity-Aware 4D Affect Recognition [55.15661254072032]
We present a sparsity-aware deep network for automatic 4D facial expression recognition (FER).
We first propose a novel augmentation method to combat the data limitation problem for deep learning.
We then present a sparsity-aware deep network to compute the sparse representations of convolutional features over multi-views.
arXiv Detail & Related papers (2020-02-08T13:09:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.