Tunable Image Quality Control of 3-D Ultrasound using Switchable
CycleGAN
- URL: http://arxiv.org/abs/2112.02896v1
- Date: Mon, 6 Dec 2021 09:40:16 GMT
- Authors: Jaeyoung Huh, Shujaat Khan, Sungjin Choi, Dongkuk Shin, Eun Sun Lee,
Jong Chul Ye
- Abstract summary: A 3-D US imaging system can visualize a volume along three axial planes.
The 3-D US has an inherent limitation in resolution compared to the 2-D US.
We propose a novel unsupervised deep learning approach to improve 3-D US image quality.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In contrast to 2-D ultrasound (US) for uniaxial plane imaging, a 3-D US
imaging system can visualize a volume along three axial planes. This allows for
a full view of the anatomy, which is useful for gynecological (GYN) and
obstetrical (OB) applications. Unfortunately, the 3-D US has an inherent
limitation in resolution compared to the 2-D US. In the case of 3-D US with a
3-D mechanical probe, for example, the image quality is comparable along the
beam direction, but significant deterioration in image quality is often
observed in the other two axial image planes. To address this, here we propose
a novel unsupervised deep learning approach to improve 3-D US image quality. In
particular, using {\em unmatched} high-quality 2-D US images as a reference, we
trained a recently proposed switchable CycleGAN architecture so that every
mapping plane in 3-D US can learn the image quality of 2-D US images. Thanks to
the switchable architecture, our network can also provide real-time control of
image enhancement level based on user preference, which is ideal for a
user-centric scanner setup. Extensive experiments with clinical evaluation
confirm that our method offers significantly improved image quality as well as
user-friendly flexibility.
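The real-time control of the enhancement level described above comes from the switchable architecture, in which adaptive instance normalization (AdaIN) codes select the target image style; interpolating between two codes yields a continuous enhancement dial. A minimal NumPy sketch of this mechanism (function names and shapes are illustrative, not the authors' implementation):

```python
import numpy as np

def adain(feat, gamma, beta, eps=1e-5):
    # Normalize each channel of `feat` (C, H, W) to zero mean and unit
    # variance, then rescale with the style code (gamma, beta).
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True)
    return gamma[:, None, None] * (feat - mu) / (sigma + eps) + beta[:, None, None]

def switchable_adain(feat, g0, b0, g1, b1, alpha):
    # alpha in [0, 1] interpolates between two learned AdaIN codes
    # (e.g., "identity" and "fully enhanced"), giving continuous
    # user control of the enhancement level at inference time.
    gamma = (1 - alpha) * g0 + alpha * g1
    beta = (1 - alpha) * b0 + alpha * b1
    return adain(feat, gamma, beta)
```

At alpha = 0 the generator reproduces one style code exactly; intermediate values blend the two codes without retraining.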
Related papers
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, referred to as the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z)
- HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction and Overlay [2.069072041357411]
HoloPOCUS is a mixed reality US system that overlays rich US information onto the user's vision in a point-of-care setting.
We validated a tracking pipeline that demonstrates higher accuracy compared to existing MR-US works.
arXiv Detail & Related papers (2023-08-26T09:28:20Z)
- Multi-View Vertebra Localization and Identification from CT Images [57.56509107412658]
We propose a multi-view method for vertebra localization and identification from CT images.
We convert the 3D problem into a 2D localization and identification task on different views.
Our method can learn the multi-view global information naturally.
arXiv Detail & Related papers (2023-07-24T14:43:07Z)
- Ultra-NeRF: Neural Radiance Fields for Ultrasound Imaging [40.72047687523214]
We present a physics-enhanced implicit neural representation (INR) for ultrasound (US) imaging that learns tissue properties from overlapping US sweeps.
Our proposed method leverages a ray-tracing-based neural rendering for novel view US synthesis.
arXiv Detail & Related papers (2023-01-25T11:02:09Z)
- Adaptive 3D Localization of 2D Freehand Ultrasound Brain Images [18.997300579859978]
We propose AdLocUI, a framework that Adaptively Localizes 2D Ultrasound Images in the 3D anatomical atlas.
We first train a convolutional neural network with 2D slices sampled from co-aligned 3D ultrasound volumes to predict their locations.
We fine-tune it with 2D freehand ultrasound images using a novel unsupervised cycle-consistency loss.
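The unsupervised cycle consistency mentioned above (also the backbone of the CycleGAN training in the main paper) penalizes the reconstruction error after mapping an image to the other domain and back. A minimal sketch with the two mappings passed in as plain functions (illustrative, not the authors' code):

```python
import numpy as np

def cycle_consistency_loss(x, forward, backward):
    # L1 penalty ||backward(forward(x)) - x||_1: after mapping x to the
    # other domain and back, the reconstruction should match the input,
    # which lets training proceed without paired data.
    return float(np.mean(np.abs(backward(forward(x)) - x)))
```

If `forward` and `backward` are true inverses of each other, the loss is zero; training drives the two generators toward this condition.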
arXiv Detail & Related papers (2022-09-12T17:59:41Z)
- Agent with Tangent-based Formulation and Anatomical Perception for Standard Plane Localization in 3D Ultrasound [56.7645826576439]
We introduce a novel reinforcement learning framework for automatic SP localization in 3D US.
First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space.
Second, we design an auxiliary task learning strategy to enhance the model's ability to recognize subtle differences between non-SPs and SPs during plane search.
arXiv Detail & Related papers (2022-07-01T14:53:27Z)
- A Novel Augmented Reality Ultrasound Framework Using an RGB-D Camera and a 3D-printed Marker [0.3061098887924466]
Our goal is to develop a simple and low-cost augmented reality echography framework using a standard RGB-D camera.
The prototype system consisted of an Occipital Structure Core RGB-D camera, a specifically designed 3D marker, and FaVoR, a fast point cloud registration algorithm.
The prototype probe was calibrated on a 3D-printed N-wire phantom using the PLUS toolkit software.
arXiv Detail & Related papers (2022-05-09T14:54:47Z)
- Sketch guided and progressive growing GAN for realistic and editable ultrasound image synthesis [12.32829386817706]
We propose a generative adversarial network (GAN) based image synthesis framework.
We present the first work that can synthesize realistic B-mode US images with high-resolution and customized texture editing features.
In addition, a feature loss is proposed to minimize the difference of high-level features between the generated and real images.
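The feature loss above compares high-level features of generated and real images rather than raw pixels; a common form is a mean squared distance over feature maps taken from a fixed encoder. A minimal sketch (the layer choice and uniform weighting are assumptions, not the paper's exact formulation):

```python
import numpy as np

def feature_loss(real_feats, fake_feats):
    # Mean squared distance between corresponding feature maps,
    # averaged over the selected encoder layers.
    return float(np.mean([np.mean((r - f) ** 2)
                          for r, f in zip(real_feats, fake_feats)]))
```

Matching high-level features instead of pixels encourages the generator to reproduce texture and structure rather than exact intensities.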
arXiv Detail & Related papers (2022-04-14T12:50:18Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during the training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
- Agent with Warm Start and Active Termination for Plane Localization in 3D Ultrasound [56.14006424500334]
Standard plane localization is crucial for ultrasound (US) diagnosis.
In prenatal US, dozens of standard planes are manually acquired with a 2D probe.
We propose a novel reinforcement learning framework to automatically localize fetal brain standard planes in 3D US.
arXiv Detail & Related papers (2019-10-10T02:21:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.