Fourier-Net: Fast Image Registration with Band-limited Deformation
- URL: http://arxiv.org/abs/2211.16342v2
- Date: Thu, 6 Jul 2023 13:46:06 GMT
- Title: Fourier-Net: Fast Image Registration with Band-limited Deformation
- Authors: Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang,
Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, Jinming Duan
- Abstract summary: Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain.
We propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free model-driven decoder.
- Score: 16.894559169947055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised image registration commonly adopts U-Net style networks to
predict dense displacement fields in the full-resolution spatial domain. For
high-resolution volumetric image data, however, this process is
resource-intensive and time-consuming. To tackle this problem, we propose
Fourier-Net, which replaces the expansive path in a U-Net style network with a
parameter-free model-driven decoder. Specifically, instead of learning to
output a full-resolution displacement field in the spatial domain, Fourier-Net
learns its low-dimensional representation in a band-limited Fourier domain.
This representation is then decoded by our devised model-driven decoder
(consisting of a zero padding layer and an inverse discrete Fourier transform
layer) to the dense, full-resolution displacement field in the spatial domain.
These changes allow our unsupervised Fourier-Net to contain fewer parameters
and computational operations, resulting in faster inference speeds. Fourier-Net
is then evaluated on two public 3D brain datasets against various
state-of-the-art approaches. For example, when compared to a recent
transformer-based method, named TransMorph, our Fourier-Net, which only uses
2.2% of its parameters and 6.66% of the multiply-add operations, achieves a
0.5% higher Dice score and an 11.48 times faster inference speed. Code is
available at https://github.com/xi-jia/Fourier-Net.
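As a concrete illustration of the parameter-free decoder described in the abstract (a zero padding layer followed by an inverse discrete Fourier transform), the sketch below zero-pads a small block of band-limited Fourier coefficients to the full resolution and inverts the DFT to recover a dense displacement component. The function name, array shapes, and centring convention are illustrative assumptions and are not taken from the paper's released code.

```python
import numpy as np

def band_limited_decoder(low_freq, full_shape):
    """Hypothetical model-driven decoder: zero-pad centred band-limited
    Fourier coefficients to the full resolution, then apply an inverse
    DFT to obtain a dense displacement field in the spatial domain."""
    # Embed the band-limited coefficients at the centre of a zero-filled
    # full-resolution spectrum (centre = zero frequency after fftshift).
    padded = np.zeros(full_shape, dtype=complex)
    starts = [(f - l) // 2 for f, l in zip(full_shape, low_freq.shape)]
    region = tuple(slice(s, s + l) for s, l in zip(starts, low_freq.shape))
    padded[region] = low_freq
    # Undo the centring and invert the DFT; keep the real part as the
    # spatial-domain displacement component.
    return np.real(np.fft.ifftn(np.fft.ifftshift(padded)))

# Toy usage with made-up shapes: decode a 20x24x28 band-limited block
# into one 160x192x224 displacement component.
low = np.random.randn(20, 24, 28) + 1j * np.random.randn(20, 24, 28)
disp_x = band_limited_decoder(low, (160, 192, 224))
print(disp_x.shape)  # (160, 192, 224)
```

Zero-padding in the frequency domain corresponds to sinc interpolation in the spatial domain, which is why this decoding step needs no learnable parameters.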
Related papers
- WiNet: Wavelet-based Incremental Learning for Efficient Medical Image Registration [68.25711405944239]
Deep image registration has demonstrated exceptional accuracy and fast inference.
Recent advances have adopted either multiple cascades or pyramid architectures to estimate dense deformation fields in a coarse-to-fine manner.
We introduce a model-driven WiNet that incrementally estimates scale-wise wavelet coefficients for the displacement/velocity field across various scales.
arXiv Detail & Related papers (2024-07-18T11:51:01Z) - FourierMamba: Fourier Learning Integration with State Space Models for Image Deraining [71.46369218331215]
Image deraining aims to remove rain streaks from rainy images and restore clear backgrounds.
We propose a new framework termed FourierMamba, which performs image deraining with Mamba in the Fourier space.
arXiv Detail & Related papers (2024-05-29T18:58:59Z) - Fourier-Net+: Leveraging Band-Limited Representation for Efficient 3D
Medical Image Registration [62.53130123397081]
U-Net style networks are commonly utilized in unsupervised image registration to predict dense displacement fields.
We first propose Fourier-Net, which replaces the costly U-Net style expansive path with a parameter-free model-driven decoder.
We then introduce Fourier-Net+, which additionally takes the band-limited spatial representation of the images as input and further reduces the number of convolutional layers in the U-Net style network's contracting path.
arXiv Detail & Related papers (2023-07-06T13:57:12Z) - Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z) - Deep Fourier Up-Sampling [100.59885545206744]
Unlike up-sampling in the spatial domain, up-sampling in the Fourier domain is more challenging because it does not follow the same local property.
We propose a theoretically sound Deep Fourier Up-Sampling (FourierUp) to solve these issues.
arXiv Detail & Related papers (2022-10-11T06:17:31Z) - Fourier Disentangled Space-Time Attention for Aerial Video Recognition [54.80846279175762]
We present an algorithm, Fourier Activity Recognition (FAR), for UAV video activity recognition.
Our formulation uses a novel Fourier object disentanglement method to innately separate out the human agent from the background.
We have evaluated our approach on multiple UAV datasets including UAV Human RGB, UAV Human Night, Drone Action, and NEC Drone.
arXiv Detail & Related papers (2022-03-21T01:24:53Z) - Seeing Implicit Neural Representations as Fourier Series [13.216389226310987]
Implicit Neural Representations (INR) use multilayer perceptrons to represent high-frequency functions in low-dimensional problem domains.
These representations achieved state-of-the-art results on tasks related to complex 3D objects and scenes.
This work analyzes the connection between the two methods and shows that a Fourier-mapped perceptron is structurally similar to a one-hidden-layer SIREN.
arXiv Detail & Related papers (2021-09-01T08:40:20Z) - Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness.
arXiv Detail & Related papers (2021-07-01T17:58:16Z) - Fourier Image Transformer [10.315102237565734]
We show that an auto-regressive image completion task is equivalent to predicting a higher resolution output given a low-resolution input.
We demonstrate the practicality of this approach in the context of computed tomography (CT) image reconstruction.
arXiv Detail & Related papers (2021-04-06T14:48:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.