VesNet-RL: Simulation-based Reinforcement Learning for Real-World US
Probe Navigation
- URL: http://arxiv.org/abs/2205.06676v1
- Date: Tue, 10 May 2022 09:34:42 GMT
- Title: VesNet-RL: Simulation-based Reinforcement Learning for Real-World US
Probe Navigation
- Authors: Yuan Bi, Zhongliang Jiang, Yuan Gao, Thomas Wendler, Angelos Karlas,
and Nassir Navab
- Abstract summary: In freehand US examinations, sonographers often navigate a US probe to visualize standard examination planes with rich diagnostic information.
We propose a simulation-based RL framework for real-world navigation of US probes towards the standard longitudinal views of vessels.
- Score: 39.7566010845081
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ultrasound (US) is one of the most common medical imaging modalities since it
is radiation-free, low-cost, and real-time. In freehand US examinations,
sonographers often navigate a US probe to visualize standard examination planes
with rich diagnostic information. However, reproducibility and stability of the
resulting images often suffer from intra- and inter-operator variation.
Reinforcement learning (RL), as an interaction-based learning method, has
demonstrated its effectiveness in visual navigating tasks; however, RL is
limited in terms of generalization. To address this challenge, we propose a
simulation-based RL framework for real-world navigation of US probes towards
the standard longitudinal views of vessels. A UNet is used to provide binary
masks from US images; thereby, the RL agent trained on simulated binary vessel
images can be applied in real scenarios without further training. To accurately
characterize actual states, a multi-modality state representation structure is
introduced to facilitate the understanding of environments. Moreover,
considering the characteristics of vessels, a novel standard view recognition
approach based on the minimum bounding rectangle is proposed to terminate the
searching process. To evaluate the effectiveness of the proposed method, the
trained policy is validated virtually on 3D volumes of a volunteer's in-vivo
carotid artery, and physically on custom-designed gel phantoms using robotic
US. The results demonstrate that the proposed approach can effectively and
accurately navigate the probe towards the longitudinal view of vessels.
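The minimum-bounding-rectangle termination rule described in the abstract can be sketched in a few lines. The snippet below is an illustrative approximation, not the authors' implementation: rather than computing a true minimum-area rectangle, it estimates the elongation of the segmented vessel from the principal axes of the binary mask, on the assumption that a longitudinal view appears as a strongly elongated region while a cross-sectional view is nearly round. The function names and the threshold value are hypothetical.

```python
import numpy as np

def elongation_ratio(mask):
    """Ratio of the principal-axis lengths of the foreground pixels.

    A value near 1 indicates a roughly round region (cross-sectional
    view); a large value indicates an elongated region (longitudinal view).
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # center the point cloud
    cov = np.cov(pts, rowvar=False)              # 2x2 covariance matrix
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    # Standard deviations along the principal axes are sqrt(eigenvalues).
    return float(np.sqrt(eigvals[0] / max(eigvals[1], 1e-9)))

def is_longitudinal(mask, threshold=3.0):
    """Terminate the search when the vessel mask is sufficiently elongated."""
    return elongation_ratio(mask) >= threshold
```

In a full pipeline, `mask` would be the binary output of the UNet at the current probe pose, and the RL agent's search would stop once `is_longitudinal` fires; the threshold would need tuning against real vessel geometry.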
Related papers
- Cardiac ultrasound simulation for autonomous ultrasound navigation [4.036497185262817]
We propose a method to generate large numbers of ultrasound images from other modalities and from arbitrary positions.
We present a novel simulation pipeline which uses segmentations from other modalities, an optimized data representation and GPU-accelerated Monte Carlo path tracing.
The proposed approach allows for fast and accurate patient-specific ultrasound image generation, and its usability for training networks for navigation-related tasks is demonstrated.
arXiv Detail & Related papers (2024-02-09T15:14:48Z) - Fast-Slow Test-Time Adaptation for Online Vision-and-Language Navigation [67.18144414660681]
We propose a Fast-Slow Test-Time Adaptation (FSTTA) approach for online Vision-and-Language Navigation (VLN).
Our method obtains impressive performance gains on four popular benchmarks.
arXiv Detail & Related papers (2023-11-22T07:47:39Z) - HoloPOCUS: Portable Mixed-Reality 3D Ultrasound Tracking, Reconstruction
and Overlay [2.069072041357411]
HoloPOCUS is a mixed reality US system that overlays rich US information onto the user's vision in a point-of-care setting.
We validated a tracking pipeline that demonstrates higher accuracy compared to existing MR-US works.
arXiv Detail & Related papers (2023-08-26T09:28:20Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Agent with Tangent-based Formulation and Anatomical Perception for
Standard Plane Localization in 3D Ultrasound [56.7645826576439]
We introduce a novel reinforcement learning framework for automatic SP localization in 3D US.
First, we formulate SP localization in 3D US as a tangent-point-based problem in RL to restructure the action space.
Second, we design an auxiliary task learning strategy to enhance the model's ability to recognize subtle differences between non-SPs and SPs in plane search.
arXiv Detail & Related papers (2022-07-01T14:53:27Z) - CheXstray: Real-time Multi-Modal Data Concordance for Drift Detection in
Medical Imaging AI [1.359138408203412]
We build and test a medical imaging AI drift monitoring workflow that tracks data and model drift without contemporaneous ground truth.
Key contributions include (1) a proof-of-concept for medical imaging drift detection, including the use of a VAE and domain-specific statistical methods.
This work has important implications for addressing the translation gap related to continuous medical imaging AI model monitoring in dynamic healthcare environments.
arXiv Detail & Related papers (2022-02-06T18:58:35Z) - Image-Guided Navigation of a Robotic Ultrasound Probe for Autonomous
Spinal Sonography Using a Shadow-aware Dual-Agent Framework [35.17207004351791]
We propose a novel dual-agent framework that integrates a reinforcement learning agent and a deep learning agent.
Our method can effectively interpret the US images and navigate the probe to acquire multiple standard views of the spine.
arXiv Detail & Related papers (2021-11-03T12:11:27Z) - Autonomous Navigation of an Ultrasound Probe Towards Standard Scan
Planes with Deep Reinforcement Learning [28.17246919349759]
We propose a framework to autonomously control the 6-D pose of a virtual US probe based on real-time image feedback.
We validate our method in a simulation environment built with real-world data collected in the US imaging of the spine.
arXiv Detail & Related papers (2021-03-01T03:09:17Z) - Offline Reinforcement Learning from Images with Latent Space Models [60.69745540036375]
Offline reinforcement learning (RL) refers to the problem of learning policies from a static dataset of environment interactions.
We build on recent advances in model-based algorithms for offline RL, and extend them to high-dimensional visual observation spaces.
Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP.
arXiv Detail & Related papers (2020-12-21T18:28:17Z) - RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.