Neural Modulation Fields for Conditional Cone Beam Neural Tomography
- URL: http://arxiv.org/abs/2307.08351v1
- Date: Mon, 17 Jul 2023 09:41:01 GMT
- Title: Neural Modulation Fields for Conditional Cone Beam Neural Tomography
- Authors: Samuele Papa, David M. Knigge, Riccardo Valperga, Nikita Moriakov,
Miltos Kofinas, Jan-Jakob Sonke, Efstratios Gavves
- Abstract summary: Conditional Cone Beam Neural Tomography (CondCBNT) shows improved performance for both high and low numbers of available projections on noise-free and noisy data.
We propose a novel conditioning method where local modulations are modeled per patient as a field over the input domain through a Neural Modulation Field (NMF).
- Score: 18.721488634071193
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conventional Computed Tomography (CT) methods require large numbers of
noise-free projections for accurate density reconstructions, limiting their
applicability to the more complex class of Cone Beam Geometry CT (CBCT)
reconstruction. Recently, deep learning methods have been proposed to overcome
these limitations, with methods based on neural fields (NF) showing strong
performance, by approximating the reconstructed density through a
continuous-in-space coordinate based neural network. Our focus is on improving
such methods; however, unlike previous work, which requires training an NF from
scratch for each new set of projections, we instead propose to leverage
anatomical consistencies over different scans by training a single conditional
NF on a dataset of projections. We propose a novel conditioning method where
local modulations are modeled per patient as a field over the input domain
through a Neural Modulation Field (NMF). The resulting Conditional Cone Beam
Neural Tomography (CondCBNT) shows improved performance for both high and low
numbers of available projections on noise-free and noisy data.
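The conditioning idea can be illustrated with a minimal NumPy sketch (layer sizes, initialization, and the shift-style modulation are assumptions for illustration, not the authors' implementation): a shared coordinate MLP maps spatial coordinates to density, while a small patient-specific MLP, the modulation field, maps the same coordinates to additive shifts applied to the shared network's hidden activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random (weight, bias) pairs for an MLP with the given layer sizes."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp(x, params):
    """Plain tanh MLP with a linear output layer."""
    h = x
    for W, b in params[:-1]:
        h = np.tanh(h @ W + b)
    W, b = params[-1]
    return h @ W + b

# Shared conditional NF: (x, y, z) coordinates -> density, with its hidden
# activations shifted by a per-patient modulation evaluated at the same point.
shared = init_mlp([3, 32, 1])    # one network shared across all patients
nmf    = init_mlp([3, 16, 32])   # one small modulation field per patient

def cond_nf(coords, shared, nmf):
    mods = mlp(coords, nmf)                # local modulations, (batch, 32)
    (W0, b0), (W1, b1) = shared
    h = np.tanh(coords @ W0 + b0 + mods)   # shift-modulated hidden layer
    return h @ W1 + b1                     # predicted density, (batch, 1)
```

Under this scheme, training would fit the shared weights on the whole dataset of projections, while only the small NMF would be fit per patient, which is what lets the model exploit anatomical consistencies across scans.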
Related papers
- Intensity Field Decomposition for Tissue-Guided Neural Tomography [30.81166574148901]
This article introduces a novel sparse-view CBCT reconstruction method, which empowers the neural field with human tissue regularization.
Our approach, termed tissue-guided neural tomography (TNT), is motivated by the distinct intensity differences between bone and soft tissue in CBCT.
Our method achieves comparable reconstruction quality with fewer projections and faster convergence compared to state-of-the-art neural rendering based methods.
arXiv Detail & Related papers (2024-11-01T06:31:53Z) - CoCPF: Coordinate-based Continuous Projection Field for Ill-Posed Inverse Problem in Imaging [78.734927709231]
Sparse-view computed tomography (SVCT) reconstruction aims to acquire CT images based on sparsely-sampled measurements.
Due to ill-posedness, implicit neural representation (INR) techniques may leave considerable "holes" (i.e., unmodeled spaces) in their fields, leading to sub-optimal results.
We propose the Coordinate-based Continuous Projection Field (CoCPF), which aims to build hole-free representation fields for SVCT reconstruction.
arXiv Detail & Related papers (2024-06-21T08:38:30Z) - Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain
State Decoding [0.0]
We propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia.
Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels.
arXiv Detail & Related papers (2023-11-06T15:08:13Z) - From NeurODEs to AutoencODEs: a mean-field control framework for
width-varying Neural Networks [68.8204255655161]
We propose a new type of continuous-time control system, called AutoencODE, based on a controlled vector field that drives the dynamics.
We show that many architectures can be recovered in regions where the loss function is locally convex.
arXiv Detail & Related papers (2023-07-05T13:26:17Z) - Low-Resource Music Genre Classification with Cross-Modal Neural Model
Reprogramming [129.4950757742912]
We introduce a novel method for leveraging pre-trained models for low-resource (music) classification based on the concept of Neural Model Reprogramming (NMR)
NMR aims at re-purposing a pre-trained model from a source domain to a target domain by modifying the input of a frozen pre-trained model.
Experimental results suggest that a neural model pre-trained on large-scale datasets can successfully perform music genre classification by using this reprogramming method.
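The input-reprogramming idea behind NMR can be sketched as follows (a toy NumPy version with an assumed frozen linear model standing in for a large pretrained network, not the paper's code): the pretrained weights stay fixed, and the only trainable parameter is an additive perturbation applied to the model's input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained" model: a fixed linear map (stand-in for a large net).
W_frozen = rng.normal(size=(64, 10))

def frozen_model(x):
    return x @ W_frozen              # these weights are never updated

def reprogram(x, delta):
    return frozen_model(x + delta)   # modify the input, not the model

# One gradient step on delta for a squared-error loss; the frozen model
# is untouched, only the input perturbation is learned.
delta = np.zeros(64)
x, target = rng.normal(size=(8, 64)), rng.normal(size=(8, 10))
pred = reprogram(x, delta)
grad_delta = ((pred - target) @ W_frozen.T).mean(axis=0)  # dL/d(delta)
delta -= 0.001 * grad_delta
```

The appeal of this setup is its parameter economy: adapting to a new (e.g. low-resource music) domain only requires optimizing the input-space perturbation, so the expensive pretrained model never needs fine-tuning.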
arXiv Detail & Related papers (2022-11-02T17:38:33Z) - TT-NF: Tensor Train Neural Fields [88.49847274083365]
We introduce a novel low-rank representation termed Tensor Train Neural Fields (TT-NF) for learning fields on regular grids.
We analyze the effect of low-rank compression on the downstream task quality metrics.
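The core of a tensor-train representation of a grid field can be sketched in a few lines (a toy NumPy example with assumed grid size and TT-rank, not the TT-NF implementation): a 3D grid of values is stored as three small "cores", and any single entry is recovered by multiplying one slice from each core.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 8, 3  # grid side length and TT-rank (toy values, assumed)

# TT cores for an n x n x n field; storage is O(n * r^2) instead of O(n^3).
G1 = rng.normal(size=(n, 1, r))
G2 = rng.normal(size=(n, r, r))
G3 = rng.normal(size=(n, r, 1))

def tt_value(i, j, k):
    """Field value at grid index (i, j, k): product of TT core slices."""
    return (G1[i] @ G2[j] @ G3[k]).item()
```

The TT-rank `r` controls the compression/quality trade-off that the paper's downstream-task analysis is about: smaller `r` means fewer parameters but a less expressive field.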
arXiv Detail & Related papers (2022-09-30T15:17:39Z) - Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z) - Convolutional Neural Network to Restore Low-Dose Digital Breast
Tomosynthesis Projections in a Variance Stabilization Domain [15.149874383250236]
A convolutional neural network (CNN) is proposed to restore low-dose (LD) projections to an image quality equivalent to a standard full-dose (FD) acquisition.
The network achieved superior results in terms of mean-normalized squared error (MNSE), normalized training time, and noise spatial correlation compared with networks trained with traditional data-driven methods.
arXiv Detail & Related papers (2022-03-22T13:31:47Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks by the local Rademacher complexity called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z) - Noise Optimization for Artificial Neural Networks [0.973490996330539]
We propose a new technique to compute the pathwise gradient estimate with respect to the standard deviation of the Gaussian noise added to each neuron of the ANN.
In numerical experiments, our proposed method can achieve significant performance improvement on robustness of several popular ANN structures.
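The pathwise estimator can be sketched via the reparameterization trick (a minimal NumPy example for one noisy tanh layer with assumed shapes, not the paper's implementation): writing the injected noise as sigma * z with z ~ N(0, 1) makes the loss differentiable in sigma along each sampled path, so dL/d(sigma) follows from the chain rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# One noisy layer: y = tanh(x @ w + sigma * z), with z ~ N(0, 1) fixed per path.
w, sigma = rng.normal(size=(4, 3)), 0.5
x, target = rng.normal(size=(16, 4)), rng.normal(size=(16, 3))

z = rng.normal(size=(16, 3))     # standard-normal draws (the "path")
y = np.tanh(x @ w + sigma * z)   # noisy activation
loss = ((y - target) ** 2).mean()

# Pathwise gradient of the loss w.r.t. sigma, chaining through the noise:
# dL/dsigma = sum of dL/dy * tanh'(pre) * z over all entries.
dL_dy = 2.0 * (y - target) / y.size
dL_dsigma = (dL_dy * (1.0 - y ** 2) * z).sum()
```

Averaging this quantity over many sampled paths gives the gradient estimate used to tune the per-neuron noise level, which is how the noise standard deviations themselves become trainable parameters.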
arXiv Detail & Related papers (2021-02-06T08:30:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.