On the accuracy of implicit neural representations for cardiovascular anatomies and hemodynamic fields
- URL: http://arxiv.org/abs/2510.20970v1
- Date: Thu, 23 Oct 2025 19:57:50 GMT
- Title: On the accuracy of implicit neural representations for cardiovascular anatomies and hemodynamic fields
- Authors: Jubilee Lee, Daniele E. Schiavazzi
- Abstract summary: Implicit neural representations (INRs) have emerged as a powerful framework for knowledge representation, synthesis, and compression. In this work, we assess the performance of state-of-the-art INRs for compressing hemodynamic fields and representing cardiovascular anatomies.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit neural representations (INRs, also known as neural fields) have recently emerged as a powerful framework for knowledge representation, synthesis, and compression. By encoding fields as continuous functions within the weights and biases of deep neural networks, rather than relying on voxel- or mesh-based structured or unstructured representations, INRs offer both resolution independence and high memory efficiency. However, their accuracy in domain-specific applications remains insufficiently understood. In this work, we assess the performance of state-of-the-art INRs for compressing hemodynamic fields derived from numerical simulations and for representing cardiovascular anatomies via signed distance functions. We investigate several strategies to mitigate spectral bias, including specialized activation functions, both fixed and trainable positional encoding, and linear combinations of nonlinear kernels. On realistic, space- and time-varying hemodynamic fields in the thoracic aorta, INRs achieved remarkable compression ratios of up to approximately 230, with maximum absolute errors of 1 mmHg for pressure and 5-10 cm/s for velocity, without extensive hyperparameter tuning. Across 48 thoracic aortic anatomies, the average and maximum absolute anatomical discrepancies were below 0.5 mm and 1.6 mm, respectively. Overall, the SIREN, MFN-Gabor, and MHE architectures demonstrated the best performance. Source code and data are available at https://github.com/desResLab/nrf.
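The core idea in the abstract, storing a field in network weights by training the network to map coordinates to field values, can be illustrated with a toy sketch. Below is a minimal pure-Python SIREN-style INR (one hidden layer with sine activations and a frequency-scaled first layer) fit to a 1-D scalar field by plain gradient descent. This is an illustrative sketch, not the authors' nrf implementation: the single hidden layer, the Gaussian toy field, and the reduced frequency scale W0 = 3 (the original SIREN paper uses 30) are all assumptions chosen for readability.

```python
import math, random

random.seed(0)

W0 = 3.0      # first-layer frequency scale (SIREN canonically uses 30)
HIDDEN = 16   # hidden units; the field is "compressed" into these weights
LR = 1e-3

# SIREN-style initialization: wide first layer, bounded output layer.
w1 = [random.uniform(-1.0, 1.0) for _ in range(HIDDEN)]
b1 = [random.uniform(-1.0, 1.0) for _ in range(HIDDEN)]
bound = math.sqrt(6.0 / HIDDEN) / W0
w2 = [random.uniform(-bound, bound) for _ in range(HIDDEN)]
b2 = 0.0

def field(x):
    """Toy ground-truth scalar field to be stored in the weights."""
    return math.exp(-4.0 * x * x)  # Gaussian bump on [-1, 1]

xs = [i / 32.0 - 1.0 for i in range(65)]   # sample coordinates
ts = [field(x) for x in xs]                # sampled field values

def forward(x):
    """y(x) = sum_j w2[j] * sin(W0 * (w1[j] * x + b1[j])) + b2"""
    pre = [W0 * (w1[j] * x + b1[j]) for j in range(HIDDEN)]
    h = [math.sin(p) for p in pre]
    return pre, h, sum(w2[j] * h[j] for j in range(HIDDEN)) + b2

def mse():
    return sum((forward(x)[2] - t) ** 2 for x, t in zip(xs, ts)) / len(xs)

losses = [mse()]
for epoch in range(300):
    # Accumulate analytic gradients of the mean squared error.
    gw1 = [0.0] * HIDDEN; gb1 = [0.0] * HIDDEN
    gw2 = [0.0] * HIDDEN; gb2 = 0.0
    for x, t in zip(xs, ts):
        pre, h, y = forward(x)
        e = 2.0 * (y - t) / len(xs)        # dL/dy for this sample
        gb2 += e
        for j in range(HIDDEN):
            gw2[j] += e * h[j]
            c = e * w2[j] * math.cos(pre[j]) * W0
            gw1[j] += c * x
            gb1[j] += c
    for j in range(HIDDEN):
        w1[j] -= LR * gw1[j]; b1[j] -= LR * gb1[j]; w2[j] -= LR * gw2[j]
    b2 -= LR * gb2
    losses.append(mse())

print(f"initial MSE {losses[0]:.4f} -> final MSE {losses[-1]:.4f}")
```

The same construction covers the paper's anatomy experiments if `field(x)` returns a signed distance to a surface instead of a simulated hemodynamic quantity, and the compression ratio is simply the size of the sampled field divided by the parameter count of the network.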
Related papers
- Deep Neural Network Architectures for Electrocardiogram Classification: A Comprehensive Evaluation [7.708113178862228]
This study presents a comprehensive evaluation of deep neural network architectures for automated arrhythmia classification. To address data scarcity in minority classes, the MIT-BIH Arrhythmia dataset was augmented using a Generative Adversarial Network (GAN). We developed and compared four distinct architectures, including Convolutional Neural Networks (CNN), CNN combined with Long Short-Term Memory (CNN-LSTM), CNN-LSTM with Attention, and 1D Residual Networks (ResNet-1D).
arXiv Detail & Related papers (2026-02-07T06:56:50Z)
- AI-Enhanced High-Density NIRS Patch for Real-Time Brain Layer Oxygenation Monitoring in Neurological Emergencies [1.554464105856087]
We introduce an AI-driven, high-density NIRS system optimized to provide real-time, layer-specific oxygenation data from the brain cortex. Our system integrates high-density NIRS reflectance data with a neural network trained on MRI-based synthetic datasets. In simulations, our AI-assisted NIRS demonstrated a strong correlation with actual cortical oxygenation, markedly outperforming conventional methods.
arXiv Detail & Related papers (2025-11-06T13:45:01Z)
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Towards a general-purpose foundation model for fMRI analysis [58.06455456423138]
We introduce NeuroSTORM, a framework that learns from 4D fMRI volumes and enables efficient knowledge transfer across diverse applications. NeuroSTORM is pre-trained on 28.65 million fMRI frames (>9,000 hours) from over 50,000 subjects across multiple centers and ages 5 to 100. It outperforms existing methods across five tasks: age/gender prediction, phenotype prediction, disease diagnosis, fMRI-to-image retrieval, and task-based fMRI.
arXiv Detail & Related papers (2025-06-11T23:51:01Z)
- Physiological neural representation for personalised tracer kinetic parameter estimation from dynamic PET [0.7147474215053953]
We propose a physiological neural representation based on implicit neural representations (INRs) for personalized kinetic parameter estimation. INRs, which learn continuous functions, allow for efficient, high-resolution parametric imaging with reduced data requirements. Our findings highlight the potential of INRs for personalized, data-efficient tracer kinetic modelling, enabling applications in tumour characterization, segmentation, and prognostic assessment.
arXiv Detail & Related papers (2025-04-23T22:12:04Z)
- Convolutional Deep Operator Networks for Learning Nonlinear Focused Ultrasound Wave Propagation in Heterogeneous Spinal Cord Anatomy [0.0]
Focused ultrasound therapy is a promising tool for optimally targeted treatment of spinal cord injuries. Current approaches rely on computer simulations to solve the governing wave propagation equations. We propose a convolutional deep operator network (DeepONet) to rapidly predict FUS pressure fields in patient spinal cords.
arXiv Detail & Related papers (2024-12-20T18:03:38Z)
- Thermodynamics-informed graph neural networks for real-time simulation of digital human twins [2.6811507121199325]
This paper presents a novel methodology aimed at advancing current lines of research in soft tissue simulation. The proposed approach integrates the geometric bias of graph neural networks with the physical bias derived from the imposition of a metriplectic structure. Based on the adopted methodologies, we propose a model that predicts human liver responses to traction and compression loads in as little as 7.3 milliseconds.
arXiv Detail & Related papers (2024-12-16T18:01:40Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval. A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed. The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- FastSurfer-HypVINN: Automated sub-segmentation of the hypothalamus and adjacent structures on high-resolutional brain MRI [3.869627124798774]
We introduce a novel, fast, and fully automated deep learning method named HypVINN for sub-segmentation of the hypothalamus.
We extensively validate our model with respect to segmentation accuracy, generalizability, in-session test-retest reliability, and sensitivity to replicate hypothalamic volume effects.
arXiv Detail & Related papers (2023-08-24T12:26:38Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks [36.16846259899793]
Spiking Neural Networks (SNNs) have attracted enormous research interest due to temporal information processing capability, low power consumption, and high biological plausibility.
Most existing learning methods learn weights only, and require manual tuning of the membrane-related parameters that determine the dynamics of a single spiking neuron.
In this paper, we take inspiration from the observation that membrane-related parameters are different across brain regions, and propose a training algorithm that is capable of learning not only the synaptic weights but also the membrane time constants of SNNs.
arXiv Detail & Related papers (2020-07-11T14:35:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.