Hardware acceleration for ultra-fast Neural Network training on FPGA for MRF map reconstruction
- URL: http://arxiv.org/abs/2506.22156v1
- Date: Fri, 27 Jun 2025 12:09:35 GMT
- Title: Hardware acceleration for ultra-fast Neural Network training on FPGA for MRF map reconstruction
- Authors: Mattia Ricchi, Fabrizio Alfonsi, Camilla Marella, Marco Barbieri, Alessandra Retico, Leonardo Brizi, Alessandro Gabrielli, Claudia Testa,
- Abstract summary: We propose an FPGA-based NN for real-time brain parameter reconstruction from MRF data. This method could enable real-time brain analysis on mobile devices, revolutionizing clinical decision-making and telemedicine.
- Score: 67.75494660740776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Magnetic Resonance Fingerprinting (MRF) is a fast quantitative MR Imaging technique that provides multi-parametric maps with a single acquisition. Neural Networks (NNs) accelerate reconstruction but require significant resources for training. We propose an FPGA-based NN for real-time brain parameter reconstruction from MRF data. Training the NN takes an estimated 200 seconds, significantly faster than standard CPU-based training, which can be up to 250 times slower. This method could enable real-time brain analysis on mobile devices, revolutionizing clinical decision-making and telemedicine.
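The core idea above is a regression network that maps a per-voxel MRF signal fingerprint to tissue parameters. A minimal sketch of that mapping, using a tiny two-layer network trained by plain gradient descent on synthetic data (the dimensions, the random "dictionary", and the linear signal model are all illustrative assumptions, not the paper's actual Bloch-simulated training set or FPGA implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 200-sample MRF fingerprint per voxel, mapped to
# two tissue parameters (standing in for T1 and T2). The "dictionary"
# below is synthetic random data, not real Bloch-simulated signals.
n_in, n_hidden, n_out = 200, 32, 2
X = rng.standard_normal((512, n_in))
true_W = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
Y = X @ true_W + 0.01 * rng.standard_normal((512, n_out))

# Two-layer MLP trained by full-batch gradient descent on MSE.
W1 = rng.standard_normal((n_in, n_hidden)) * np.sqrt(2.0 / n_in)
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * np.sqrt(2.0 / n_hidden)
b2 = np.zeros(n_out)

def forward(X):
    H = np.maximum(X @ W1 + b1, 0.0)   # ReLU hidden layer
    return H, H @ W2 + b2              # activations, predicted parameters

_, P0 = forward(X)
loss0 = np.mean((P0 - Y) ** 2)         # loss before training

lr = 1e-2
for _ in range(200):
    H, P = forward(X)
    dP = 2.0 * (P - Y) / len(X)        # dLoss/dPredictions
    dW2, db2 = H.T @ dP, dP.sum(0)
    dH = dP @ W2.T
    dH[H <= 0.0] = 0.0                 # ReLU gradient mask
    dW1, db1 = X.T @ dH, dH.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

_, P = forward(X)
loss = np.mean((P - Y) ** 2)           # loss after training
```

The inner loop is the dense matrix-multiply workload that an FPGA pipeline can parallelize; the paper's reported speedup comes from accelerating exactly this kind of repeated forward/backward pass.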
Related papers
- Adaptive Neural Quantum States: A Recurrent Neural Network Perspective [0.7234862895932991]
We present an adaptive scheme to optimize neural-network quantum states (NQS). NQS are powerful neural-network ansätze that have emerged as promising tools for studying quantum many-body physics.
arXiv Detail & Related papers (2025-07-24T18:00:03Z) - Neuromorphic Wireless Split Computing with Resonate-and-Fire Neurons [69.73249913506042]
This paper investigates a wireless split computing architecture that employs resonate-and-fire (RF) neurons to process time-domain signals directly. By resonating at tunable frequencies, RF neurons extract time-localized spectral features while maintaining low spiking activity. Experimental results show that the proposed RF-SNN architecture achieves comparable accuracy to conventional LIF-SNNs and ANNs.
arXiv Detail & Related papers (2025-06-24T21:14:59Z) - Scalable Mechanistic Neural Networks for Differential Equations and Machine Learning [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences. We reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear. Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z) - Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons [1.7056768055368383]
Spiking neural networks (SNN) are able to learn features while using less energy, especially on neuromorphic hardware.
The most widely used neuron in deep learning is the Leaky Integrate-and-Fire (LIF) neuron.
arXiv Detail & Related papers (2023-06-22T04:25:27Z) - MF-NeRF: Memory Efficient NeRF with Mixed-Feature Hash Table [62.164549651134465]
We propose MF-NeRF, a memory-efficient NeRF framework that employs a Mixed-Feature hash table to improve memory efficiency and reduce training time while maintaining reconstruction quality.
Our experiments with state-of-the-art Instant-NGP, TensoRF, and DVGO indicate that MF-NeRF could achieve the fastest training time on the same GPU hardware with similar or even higher reconstruction quality.
arXiv Detail & Related papers (2023-04-25T05:44:50Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - GLEAM: Greedy Learning for Large-Scale Accelerated MRI Reconstruction [50.248694764703714]
Unrolled neural networks have recently achieved state-of-the-art accelerated MRI reconstruction.
These networks unroll iterative optimization algorithms by alternating between physics-based consistency and neural-network based regularization.
We propose Greedy LEarning for Accelerated MRI reconstruction, an efficient training strategy for high-dimensional imaging settings.
arXiv Detail & Related papers (2022-07-18T06:01:29Z) - ERNAS: An Evolutionary Neural Architecture Search for Magnetic Resonance Image Reconstructions [0.688204255655161]
A popular approach to accelerated MRI is to undersample the k-space data.
While undersampling speeds up the scan procedure, it generates artifacts in the images, and advanced reconstruction algorithms are needed to produce artifact-free images.
In this work, MRI reconstruction from undersampled data was carried out using an optimized neural network using a novel evolutionary neural architecture search algorithm.
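The undersampling-and-artifact trade-off described above can be sketched in a few lines: a toy phantom image stands in for a brain slice, a mask keeps only a subset of k-space phase-encode lines, and the zero-filled inverse FFT shows the aliasing that reconstruction algorithms must remove (the phantom, mask density, and 64×64 size are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy phantom image standing in for a brain slice (hypothetical data).
x = np.zeros((64, 64))
x[20:44, 20:44] = 1.0

# Fully sampled Cartesian k-space of the phantom.
k = np.fft.fftshift(np.fft.fft2(x))

# Undersampling mask over phase-encode lines (rows): keep the
# low-frequency centre plus a random subset of outer lines.
mask = np.zeros(64, dtype=bool)
mask[28:36] = True
mask |= rng.random(64) < 0.25
k_under = np.where(mask[:, None], k, 0.0)

# Zero-filled reconstruction: faster to "acquire", but aliased. This is
# the artifact that learned reconstruction methods are trained to remove.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_under)))
acceleration = x.shape[0] / mask.sum()
rel_err = np.linalg.norm(recon - x) / np.linalg.norm(x)
```

Keeping the densely sampled centre preserves image contrast; the randomly skipped outer lines are what speed up the scan and what the reconstruction network must compensate for.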
arXiv Detail & Related papers (2022-06-15T03:42:18Z) - Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction [33.82162420709648]
We propose modeling the proximal operators of unrolled neural networks with scale-equivariant convolutional neural networks.
Our approach demonstrates strong improvements over the state-of-the-art unrolled neural networks under the same memory constraints.
arXiv Detail & Related papers (2022-04-21T23:29:52Z) - Towards performant and reliable undersampled MR reconstruction via diffusion model sampling [67.73698021297022]
DiffuseRecon is a novel diffusion model-based MR reconstruction method.
It guides the generation process based on the observed signals.
It does not require additional training on specific acceleration factors.
arXiv Detail & Related papers (2022-03-08T02:25:38Z) - Real-Time EMG Signal Classification via Recurrent Neural Networks [2.66418345185993]
We use a set of recurrent neural network-based architectures to increase the classification accuracy and reduce the prediction delay time.
The performances of these architectures are compared; in general, they outperform other state-of-the-art methods, achieving 96% classification accuracy within 600 ms.
arXiv Detail & Related papers (2021-09-13T02:36:44Z) - Learning Bloch Simulations for MR Fingerprinting by Invertible Neural Networks [0.8399688944263843]
Invertible neural networks (INNs) might be a feasible alternative to the current solely backward-based NNs for MRF reconstruction.
arXiv Detail & Related papers (2020-08-10T14:09:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.