Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis
- URL: http://arxiv.org/abs/2103.15510v1
- Date: Mon, 29 Mar 2021 11:30:18 GMT
- Title: Data-driven generation of plausible tissue geometries for realistic photoacoustic image synthesis
- Authors: Melanie Schellenberg, Janek Gröhl, Kris Dreher, Niklas Holzwarth, Minu D. Tizabi, Alexander Seitel, Lena Maier-Hein
- Abstract summary: Photoacoustic tomography (PAT) has the potential to recover morphological and functional tissue properties.
We propose a novel approach to PAT data simulation, which we refer to as "learning to simulate".
We leverage the concept of Generative Adversarial Networks (GANs) trained on semantically annotated medical imaging data to generate plausible tissue geometries.
- Score: 53.65837038435433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photoacoustic tomography (PAT) has the potential to recover morphological and
functional tissue properties such as blood oxygenation with high spatial
resolution and in an interventional setting. However, decades of research
invested in solving the inverse problem of recovering clinically relevant
tissue properties from spectral measurements have failed to produce solutions
that can quantify tissue parameters robustly in a clinical setting. Previous
attempts to address the limitations of model-based approaches with machine
learning were hampered by the absence of labeled reference data needed for
supervised algorithm training. While this bottleneck has been tackled by
simulating training data, the domain gap between real and simulated images
remains a huge unsolved challenge. As a first step to address this bottleneck,
we propose a novel approach to PAT data simulation, which we refer to as
"learning to simulate". Our approach involves subdividing the challenge of
generating plausible simulations into two disjoint problems: (1) Probabilistic
generation of realistic tissue morphology, represented by semantic segmentation
maps and (2) pixel-wise assignment of corresponding optical and acoustic
properties. In the present work, we focus on the first challenge. Specifically,
we leverage the concept of Generative Adversarial Networks (GANs) trained on
semantically annotated medical imaging data to generate plausible tissue
geometries. According to an initial in silico feasibility study our approach is
well-suited for contributing to realistic PAT image synthesis and could thus
become a fundamental step for deep learning-based quantitative PAT.
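
A minimal sketch of the first sub-problem, adversarial generation of semantic segmentation maps, is given below. The image size (64x64), the number of tissue classes, and the layer configuration are illustrative assumptions, not the architecture reported in the paper.

    # Minimal, illustrative GAN for generating semantic segmentation maps
    # (tissue geometries); NOT the architecture described in the paper.
    import torch
    import torch.nn as nn

    LATENT_DIM, NUM_CLASSES = 128, 4  # illustrative values

    class LabelMapGenerator(nn.Module):
        """Maps a latent vector to per-pixel class scores (a tissue-geometry map)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
                nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
                nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
                nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
                nn.ConvTranspose2d(32, NUM_CLASSES, 4, 2, 1),                                      # 64x64
            )

        def forward(self, z):
            logits = self.net(z.view(-1, LATENT_DIM, 1, 1))
            return torch.softmax(logits, dim=1)  # soft one-hot label map

    class LabelMapDiscriminator(nn.Module):
        """Scores whether a (soft) one-hot segmentation map looks like an annotated one."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(NUM_CLASSES, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 32x32
                nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),           # 16x16
                nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),          # 8x8
                nn.Conv2d(256, 1, 8),                                           # single real/fake logit
            )

        def forward(self, label_map):
            return self.net(label_map).view(-1)

    # Sampling a plausible tissue geometry after (hypothetical) adversarial training:
    generator = LabelMapGenerator()
    tissue_geometry = generator(torch.randn(1, LATENT_DIM)).argmax(dim=1)  # (1, 64, 64) class indices

The second sub-problem of the proposed pipeline would then assign optical and acoustic properties to each class index before running the actual photoacoustic simulation.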
Related papers
- Neurovascular Segmentation in sOCT with Deep Learning and Synthetic Training Data [4.5276169699857505]
This study demonstrates a synthesis engine for neurovascular segmentation in serial-section optical coherence tomography images.
Our approach comprises two phases: label synthesis and label-to-image transformation.
We demonstrate the efficacy of the former by comparing it to several more realistic sets of training labels, and the latter by an ablation study of synthetic noise and artifact models.
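
As a purely illustrative sketch of this two-phase idea (random vessel-like labels followed by a simple label-to-image transformation with noise; not the authors' synthesis engine), consider:

    # Illustrative two-phase synthesis sketch: labels first, then images.
    import numpy as np

    def synthesize_labels(size=128, n_vessels=5, seed=0):
        """Phase 1: rasterize a few random straight 'vessels' into a binary label map."""
        rng = np.random.default_rng(seed)
        labels = np.zeros((size, size), dtype=np.uint8)
        yy, xx = np.mgrid[0:size, 0:size]
        for _ in range(n_vessels):
            x0, y0 = rng.uniform(0, size, 2)      # a point on the vessel centerline
            theta = rng.uniform(0, np.pi)         # centerline orientation
            radius = rng.uniform(1.0, 3.0)        # vessel half-width in pixels
            dist = np.abs(np.cos(theta) * (yy - y0) - np.sin(theta) * (xx - x0))
            labels[dist < radius] = 1
        return labels

    def labels_to_image(labels, seed=1):
        """Phase 2: assign intensities per class, then add sensor-like noise."""
        rng = np.random.default_rng(seed)
        image = np.where(labels == 1, 0.8, 0.2)   # vessel vs. background intensity
        image = image + 0.05 * rng.standard_normal(labels.shape)
        return np.clip(image, 0.0, 1.0)

    labels = synthesize_labels()
    image = labels_to_image(labels)  # (image, labels) pair for supervised training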
arXiv Detail & Related papers (2024-07-01T16:09:07Z)
- DPER: Diffusion Prior Driven Neural Representation for Limited Angle and Sparse View CT Reconstruction [45.00528216648563]
Diffusion Prior Driven Neural Representation (DPER) is an unsupervised framework designed to address the exceptionally ill-posed inverse problems of limited-angle and sparse-view CT reconstruction.
DPER adopts the Half Quadratic Splitting (HQS) algorithm to decompose the inverse problem into data fidelity and distribution prior sub-problems.
We conduct comprehensive experiments to evaluate the performance of DPER on LACT and ultra-SVCT reconstruction with two public datasets.
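
For orientation, the Half Quadratic Splitting step mentioned above can be written in generic notation (symbols are generic, not the paper's): a penalized reconstruction problem is split via an auxiliary variable into two alternating sub-problems,

    \[
    \min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(x)
    \;\;\longrightarrow\;\;
    \min_{x,z}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda R(z) + \tfrac{\mu}{2}\|z - x\|_2^2,
    \]
    \[
    x^{k+1} = \arg\min_{x}\; \tfrac{1}{2}\|Ax - y\|_2^2 + \tfrac{\mu}{2}\|x - z^{k}\|_2^2,
    \qquad
    z^{k+1} = \arg\min_{z}\; \lambda R(z) + \tfrac{\mu}{2}\|z - x^{k+1}\|_2^2.
    \]

The first update is the data-fidelity sub-problem; the second is a proximal step on the prior, which is where a learned (e.g., diffusion) prior can be plugged in.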
arXiv Detail & Related papers (2024-04-27T12:55:13Z)
- PINQI: An End-to-End Physics-Informed Approach to Learned Quantitative MRI Reconstruction [0.7199733380797579]
Quantitative Magnetic Resonance Imaging (qMRI) enables the reproducible measurement of biophysical parameters in tissue.
The challenge lies in solving a nonlinear, ill-posed inverse problem to obtain desired tissue parameter maps from acquired raw data.
We propose PINQI, a novel qMRI reconstruction method that integrates the knowledge about the signal, acquisition model, and learned regularization into a single end-to-end trainable neural network.
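
In generic notation (not necessarily the paper's), the underlying problem is to recover tissue parameters \theta from raw data y through a nonlinear signal model q and an acquisition operator A, with a learned regularizer \mathcal{R}_\phi:

    \[
    \hat{\theta} = \arg\min_{\theta}\; \tfrac{1}{2}\big\| A\, q(\theta) - y \big\|_2^2 + \mathcal{R}_{\phi}(\theta).
    \]

Per the summary above, PINQI folds the signal model, the acquisition model, and the learned regularization into a single end-to-end trainable network rather than treating them as separate stages.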
arXiv Detail & Related papers (2023-06-19T15:37:53Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
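
The maximum-likelihood training referred to above is the standard change-of-variables objective for invertible networks (generic notation):

    \[
    \log p_X(x \mid c) \;=\; \log p_Z\!\big(f_{\theta}(x; c)\big) \;+\; \log \left| \det \frac{\partial f_{\theta}(x; c)}{\partial x} \right|.
    \]

Because f_{\theta}(\cdot\,; c) is a bijection for every condition c, mapping a sample forward and back reproduces it exactly, which is why cycle consistency needs no additional loss term.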
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination between rich optical contrast and high resolution in deep tissues.
However, no standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader clinical applications of OA.
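
The excitation-to-pressure step described above is commonly summarized by the standard photoacoustic relation (generic notation):

    \[
    p_0(\mathbf{r}) \;=\; \Gamma(\mathbf{r})\, \mu_a(\mathbf{r})\, \Phi(\mathbf{r}),
    \]

where p_0 is the initial pressure, \Gamma the Grüneisen parameter, \mu_a the optical absorption coefficient, and \Phi the local light fluence; the detected ultrasound waves originate from this thermoelastically generated pressure distribution.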
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Deep Learning for Ultrasound Speed-of-Sound Reconstruction: Impacts of Training Data Diversity on Stability and Robustness [7.909848251752742]
We propose a new simulation setup for training data generation based on Tomosynthesis images.
We studied the sensitivity of the trained network to different simulation parameters.
We showed that the network trained with the joint set of data is more stable on out-of-domain simulated data as well as measured phantom data.
arXiv Detail & Related papers (2022-02-01T11:09:35Z)
- InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP-based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves the state of the art for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z)
- Invertible Neural Networks for Uncertainty Quantification in Photoacoustic Imaging [22.690971184202944]
In this work, we present a new approach for handling this specific type of uncertainty by leveraging the concept of conditional invertible neural networks (cINNs).
Specifically, we propose going beyond commonly used point estimates for tissue oxygenation and converting single-pixel initial pressure spectra to the full posterior probability density.
Based on the presented architecture, we demonstrate two use cases which leverage this information to not only detect and quantify but also to compensate for uncertainties.
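
A posterior over tissue parameters can be sampled from such a trained cINN by drawing latent samples and inverting the network under the measured condition (generic notation, not necessarily the paper's):

    \[
    z^{(k)} \sim \mathcal{N}(0, I), \qquad s^{(k)} = g_{\phi}^{-1}\big(z^{(k)};\, y\big), \quad k = 1, \dots, K,
    \]

where y is the single-pixel initial pressure spectrum used as the condition and the samples s^{(k)} approximate the posterior over oxygenation, from which both point estimates and uncertainty measures can be read off.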
arXiv Detail & Related papers (2020-11-10T14:17:18Z)