A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation
- URL: http://arxiv.org/abs/2505.12998v1
- Date: Mon, 19 May 2025 11:37:51 GMT
- Title: A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation
- Authors: Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy
- Abstract summary: Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention. TFUScapes is the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls. DeepTFUS is a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position.
- Score: 1.662610796043078
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
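The abstract describes the conditioning mechanism only at a high level. As a rough illustration of the general pattern it names (Fourier-encoded transducer position, an MLP producing a global embedding, and feature-wise modulation of U-Net encoder features), the following PyTorch sketch shows one way such a conditioning path could look. All module names, dimensions, and design choices are assumptions for illustration and are not taken from the released TFUScapes code.

```python
# Hypothetical sketch of the transducer-conditioning path described in the
# abstract (Fourier position encoding -> MLP embedding -> FiLM modulation of
# U-Net encoder features). Names and sizes are illustrative, not from the
# released code.
import torch
import torch.nn as nn


def fourier_encode(xyz: torch.Tensor, num_bands: int = 8) -> torch.Tensor:
    """Encode a (B, 3) transducer position with sin/cos features."""
    freqs = 2.0 ** torch.arange(num_bands, device=xyz.device, dtype=xyz.dtype)
    angles = xyz.unsqueeze(-1) * freqs                               # (B, 3, F)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)  # (B, 3, 2F)
    return enc.flatten(1)                                            # (B, 6F)


class FiLM(nn.Module):
    """Feature-wise modulation of a 3D feature map by a global embedding."""

    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        self.to_scale_shift = nn.Linear(embed_dim, 2 * num_channels)

    def forward(self, feat: torch.Tensor, embed: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_scale_shift(embed).chunk(2, dim=-1)    # (B, C) each
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return feat * (1.0 + gamma) + beta


class TransducerConditioning(nn.Module):
    """Global transducer embedding from the Fourier-encoded position."""

    def __init__(self, num_bands: int = 8, embed_dim: int = 256):
        super().__init__()
        self.num_bands = num_bands
        self.mlp = nn.Sequential(
            nn.Linear(6 * num_bands, embed_dim), nn.GELU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(fourier_encode(xyz, self.num_bands))


# Usage: modulate one encoder stage of a 3D U-Net with the transducer embedding.
cond = TransducerConditioning()
film = FiLM(embed_dim=256, num_channels=32)
features = torch.randn(2, 32, 16, 16, 16)   # encoder features for a CT patch
position = torch.rand(2, 3)                  # normalized transducer coordinates
modulated = film(features, cond(position))   # same shape as `features`
```

In a full model, a FiLM step like this would typically be applied at several encoder stages, alongside the dynamic convolutions and cross-attention mentioned in the abstract, which are omitted here.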
Related papers
- SUFFICIENT: A scan-specific unsupervised deep learning framework for high-resolution 3D isotropic fetal brain MRI reconstruction [7.268308489093152]
We propose an unsupervised iterative SVR-SRR framework for isotropic high-resolution (HR) volume reconstruction. A decoding network embedded within a deep image prior framework is combined with a comprehensive image degradation model to produce the HR volume. Experiments conducted on large-magnitude motion-corrupted simulation data and clinical data demonstrate the superior performance of the proposed framework.
arXiv Detail & Related papers (2025-05-23T04:53:59Z)
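The SUFFICIENT entry above couples a decoding network under a deep image prior with an image degradation model. The sketch below shows the bare deep-image-prior pattern behind that coupling: a decoder is fitted to a single degraded observation through a fixed degradation operator, so only the network structure regularizes the reconstruction. The 2D toy degradation (blur plus 2x downsampling) and all names are assumptions; the actual slice-to-volume and super-resolution components are not reproduced.

```python
# Minimal deep-image-prior sketch: fit a decoder to one degraded observation
# through a known degradation model. Toy 2D setup; names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def degrade(x: torch.Tensor) -> torch.Tensor:
    """Toy degradation model: box blur followed by 2x downsampling."""
    kernel = torch.ones(1, 1, 3, 3, device=x.device) / 9.0
    blurred = F.conv2d(x, kernel, padding=1)
    return F.avg_pool2d(blurred, kernel_size=2)

decoder = nn.Sequential(              # decoding network acting as the prior
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

z = torch.randn(1, 16, 64, 64)        # fixed latent code
observed = torch.rand(1, 1, 64, 64)   # stand-in for a degraded low-resolution scan
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(200):               # iterative fitting, as in deep image prior
    optimizer.zero_grad()
    hr_estimate = decoder(z)                           # (1, 1, 128, 128) HR candidate
    loss = F.mse_loss(degrade(hr_estimate), observed)  # match the degraded observation
    loss.backward()
    optimizer.step()
```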
- Convolutional Deep Operator Networks for Learning Nonlinear Focused Ultrasound Wave Propagation in Heterogeneous Spinal Cord Anatomy [0.0]
Focused ultrasound therapy is a promising tool for optimally targeted treatment of spinal cord injuries. Current approaches rely on computer simulations to solve the governing wave propagation equations. We propose a convolutional deep operator network (DeepONet) to rapidly predict FUS pressure fields in patient spinal cords.
arXiv Detail & Related papers (2024-12-20T18:03:38Z)
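For the convolutional DeepONet entry above, the core idea is a branch network that encodes the input function (here, a patient-specific acoustic-property map) and a trunk network that encodes query coordinates, with the predicted pressure given by their inner product. The sketch below is a minimal 2D illustration of that structure; shapes, layer sizes, and names are assumptions rather than the authors' architecture.

```python
# Minimal convolutional DeepONet sketch: CNN branch encodes the anatomy map,
# MLP trunk encodes query coordinates, pressure is their inner product.
import torch
import torch.nn as nn

class ConvDeepONet(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.branch = nn.Sequential(                  # encodes the 2D anatomy map
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        self.trunk = nn.Sequential(                   # encodes query coordinates
            nn.Linear(2, 64), nn.Tanh(),
            nn.Linear(64, latent_dim), nn.Tanh(),
        )

    def forward(self, anatomy: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        b = self.branch(anatomy)                      # (B, latent_dim)
        t = self.trunk(coords)                        # (B, N, latent_dim)
        return torch.einsum("bd,bnd->bn", b, t)       # pressure at N query points

model = ConvDeepONet()
anatomy = torch.rand(4, 1, 64, 64)                    # per-patient property maps
coords = torch.rand(4, 1000, 2)                       # query locations in [0, 1]^2
pressure = model(anatomy, coords)                     # (4, 1000)
```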
- RMSim: Controlled Respiratory Motion Simulation on Static Patient Scans [7.575469466607952]
We present a novel 3D Seq2Seq deep learning respiratory motion simulator (RMSim) that learns from 4D-CT images.
10-phase 4D-CTs of 140 internal patients were used to train and test RMSim.
We validated our RMSim output with both private and public benchmark datasets.
arXiv Detail & Related papers (2023-01-26T21:20:14Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency-domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
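The Transform Once entry above argues for learning directly in the frequency domain with a single forward and a single inverse transform. The sketch below illustrates that general pattern with a stack of learned complex pointwise weights acting on truncated Fourier modes; the mode truncation, the magnitude-gated nonlinearity, and all sizes are assumptions, and the paper's actual blueprint differs in its details.

```python
# Sketch of frequency-domain learning with one forward and one inverse FFT:
# several learned complex pointwise layers act on truncated modes in between.
import torch
import torch.nn as nn

class SpectralStack(nn.Module):
    def __init__(self, channels: int = 8, modes: int = 16, depth: int = 3):
        super().__init__()
        self.modes = modes
        # One complex weight tensor per layer, acting on the kept low modes.
        self.weights = nn.ParameterList([
            nn.Parameter(0.02 * torch.randn(channels, channels, modes, dtype=torch.cfloat))
            for _ in range(depth)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (B, C, N) real signal
        x_hat = torch.fft.rfft(x, dim=-1)                   # transform once
        low = x_hat[..., : self.modes]                      # keep low-frequency modes
        for w in self.weights:                              # stay in frequency domain
            low = torch.einsum("bik,iok->bok", low, w)
            low = low * torch.sigmoid(low.abs())            # magnitude-gated nonlinearity (illustrative)
        x_hat = torch.zeros_like(x_hat)
        x_hat[..., : self.modes] = low
        return torch.fft.irfft(x_hat, n=x.size(-1), dim=-1)  # inverse transform once

layer = SpectralStack()
signal = torch.randn(2, 8, 128)
out = layer(signal)                                          # (2, 8, 128)
```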
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder with hash encoding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
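The NAF entry above represents attenuation coefficients as a continuous function of 3D position, parameterized by a fully connected network with a learned hash encoding. The sketch below keeps the coordinate-MLP structure but substitutes a plain sin/cos frequency encoding for the hash encoder; the toy ray integration and all sizes are illustrative assumptions.

```python
# Minimal neural attenuation field sketch: an MLP maps an encoded 3D coordinate
# to a non-negative attenuation value, which can be integrated along rays.
import torch
import torch.nn as nn

class AttenuationField(nn.Module):
    def __init__(self, num_bands: int = 6, hidden: int = 128):
        super().__init__()
        self.num_bands = num_bands
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * num_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),   # attenuation is non-negative
        )

    def encode(self, xyz: torch.Tensor) -> torch.Tensor:
        freqs = 2.0 ** torch.arange(self.num_bands, dtype=xyz.dtype, device=xyz.device)
        ang = xyz[..., None] * freqs                                   # (..., 3, F)
        return torch.cat([torch.sin(ang), torch.cos(ang)], dim=-1).flatten(-2)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.encode(xyz)).squeeze(-1)

# Query the field along one straight ray and integrate, mimicking the forward
# projection that would be compared against measured X-ray intensities.
field = AttenuationField()
t = torch.linspace(0.0, 1.0, steps=256).unsqueeze(-1)                 # (256, 1)
origin = torch.tensor([0.0, 0.5, 0.5])
direction = torch.tensor([1.0, 0.0, 0.0])
ray_points = origin + t * direction                                   # (256, 3)
mu = field(ray_points)                                                # per-sample attenuation
line_integral = mu.mean() * direction.norm()                          # crude quadrature
```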
- Focused Decoding Enables 3D Anatomical Detection by Transformers [64.36530874341666]
We propose a novel Detection Transformer for 3D anatomical structure detection, dubbed Focused Decoder.
Focused Decoder leverages information from an anatomical region atlas to simultaneously deploy query anchors and restrict the cross-attention's field of view.
We evaluate our proposed approach on two publicly available CT datasets and demonstrate that Focused Decoder not only provides strong detection results and thus alleviates the need for a vast amount of annotated data but also exhibits exceptional and highly intuitive explainability of results via attention weights.
arXiv Detail & Related papers (2022-07-21T22:17:21Z)
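The Focused Decoder entry above restricts the cross-attention field of view using query anchors derived from an anatomical region atlas. The sketch below shows the masking mechanism in isolation: each query attends only to feature tokens inside its atlas-derived region. The mask construction and dimensions are assumptions; the full detection transformer is not reproduced.

```python
# Sketch of atlas-restricted cross-attention: tokens outside a query's
# region of interest are masked out before the softmax.
import torch

def focused_cross_attention(queries, keys, values, roi_mask):
    """
    queries: (B, Q, D) query anchors, one per anatomical structure
    keys, values: (B, N, D) flattened image feature tokens
    roi_mask: (B, Q, N) boolean, True where a token lies in the query's region
    """
    d = queries.size(-1)
    scores = queries @ keys.transpose(-2, -1) / d ** 0.5    # (B, Q, N)
    scores = scores.masked_fill(~roi_mask, float("-inf"))   # restrict field of view
    attn = torch.softmax(scores, dim=-1)
    return attn @ values                                     # (B, Q, D)

B, Q, N, D = 2, 8, 512, 64
queries = torch.randn(B, Q, D)
tokens = torch.randn(B, N, D)
# Toy atlas mask: each query is allowed to see a different contiguous block.
roi_mask = torch.zeros(B, Q, N, dtype=torch.bool)
for q in range(Q):
    roi_mask[:, q, q * 64:(q + 1) * 64] = True
out = focused_cross_attention(queries, tokens, tokens, roi_mask)   # (2, 8, 64)
```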
- Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z)
- Revisiting 3D Context Modeling with Supervised Pre-training for Universal Lesion Detection in CT Slices [48.85784310158493]
We propose a Modified Pseudo-3D Feature Pyramid Network (MP3D FPN) to efficiently extract 3D context enhanced 2D features for universal lesion detection in CT slices.
With the novel pre-training method, the proposed MP3D FPN achieves state-of-the-art detection performance on the DeepLesion dataset.
The proposed 3D pre-trained weights can potentially be used to boost the performance of other 3D medical image analysis tasks.
arXiv Detail & Related papers (2020-12-16T07:11:16Z)
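The MP3D FPN entry above extracts 3D-context-enhanced 2D features from CT slices. One common way to keep such 3D context cheap is to factorize a 3D convolution into an in-plane convolution followed by a convolution along the slice axis; the pseudo-3D block below sketches that idea. It is only an illustration of the general technique; the paper's exact block design and pre-training scheme are not reproduced.

```python
# Pseudo-3D convolution block: a 1x3x3 in-plane convolution followed by a
# 3x1x1 convolution across slices, approximating a full 3x3x3 convolution.
import torch
import torch.nn as nn

class Pseudo3DBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))           # in-plane context
        self.axial = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                               padding=(1, 0, 0))             # cross-slice context
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:       # x: (B, C, S, H, W)
        return self.act(self.axial(self.act(self.spatial(x))))

block = Pseudo3DBlock(1, 16)
ct_slab = torch.randn(2, 1, 9, 128, 128)                       # 9 adjacent CT slices
features = block(ct_slab)                                       # (2, 16, 9, 128, 128)
```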
- Impact of Spherical Coordinates Transformation Pre-processing in Deep Convolution Neural Networks for Brain Tumor Segmentation and Survival Prediction [0.0]
We propose a novel method that feeds deep convolutional neural networks (DCNNs) with input data transformed into spherical coordinates.
In this work, the spherical coordinate transformation has been applied as a preprocessing step.
The LesionEncoder framework has been applied to automatically extract features from the DCNN models, achieving an accuracy of 0.586 for overall survival (OS) prediction.
arXiv Detail & Related papers (2020-10-27T00:33:03Z)
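The entry above applies a spherical coordinate transformation to the input volumes before feeding them to the DCNN. The sketch below shows one straightforward way to perform such a transformation: resampling a Cartesian volume onto a regular (r, theta, phi) grid with scipy.ndimage.map_coordinates. The center choice, grid resolution, and interpolation order are assumptions for illustration.

```python
# Resample a Cartesian volume onto a regular spherical (r, theta, phi) grid,
# so radial structure maps onto one axis of the network input.
import numpy as np
from scipy.ndimage import map_coordinates

def to_spherical(volume: np.ndarray, n_r=64, n_theta=64, n_phi=128) -> np.ndarray:
    center = (np.array(volume.shape) - 1) / 2.0
    r_max = center.min()                                     # stay inside the volume
    r = np.linspace(0.0, r_max, n_r)
    theta = np.linspace(0.0, np.pi, n_theta)                 # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)  # azimuth
    R, T, P = np.meshgrid(r, theta, phi, indexing="ij")

    # Spherical -> Cartesian sample coordinates (in voxel units).
    x = center[0] + R * np.sin(T) * np.cos(P)
    y = center[1] + R * np.sin(T) * np.sin(P)
    z = center[2] + R * np.cos(T)
    coords = np.stack([x, y, z])                              # (3, n_r, n_theta, n_phi)
    return map_coordinates(volume, coords, order=1, mode="nearest")

mri = np.random.rand(128, 128, 128).astype(np.float32)       # stand-in brain volume
spherical = to_spherical(mri)                                  # (64, 64, 128)
```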
- 4D Spatio-Temporal Convolutional Networks for Object Position Estimation in OCT Volumes [69.62333053044712]
3D convolutional neural networks (CNNs) have shown promising performance for pose estimation of a marker object using single OCT images.
We extend 3D CNNs to 4D-temporal CNNs to evaluate the impact of additional temporal information for marker object tracking.
arXiv Detail & Related papers (2020-07-02T12:02:20Z)
- A Hybrid 3DCNN and 3DC-LSTM based model for 4D Spatio-temporal fMRI data: An ABIDE Autism Classification study [0.0]
We introduce an end-to-end algorithm capable of extracting features from full 4-D data using 3-D CNNs and 3-D Convolutional LSTMs (C-LSTMs).
Our results show that the proposed model achieves state-of-the-art results on single sites, with F1-scores of 0.78 and 0.7 on the NYU and UM sites, respectively.
arXiv Detail & Related papers (2020-02-14T11:52:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.