Programmable Control of Ultrasound Swarmbots through Reinforcement
Learning
- URL: http://arxiv.org/abs/2209.15393v1
- Date: Fri, 30 Sep 2022 11:46:12 GMT
- Title: Programmable Control of Ultrasound Swarmbots through Reinforcement
Learning
- Authors: Matthijs Schrage, Mahmoud Medany, and Daniel Ahmed
- Abstract summary: Acoustically driven microrobot navigation based on microbubbles is a promising approach for targeted drug delivery.
We use reinforcement learning control strategies to learn the microrobot dynamics and manipulate them through acoustic forces.
The results demonstrate, for the first time, autonomous acoustic navigation of microbubbles in a microfluidic environment.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Powered by acoustics, existing therapeutic and diagnostic procedures
will become less invasive, and new methods that were previously out of reach
will become possible. Acoustically driven microrobot navigation based on
microbubbles is a promising approach for targeted drug delivery. Previous
studies have used acoustic techniques to manipulate microbubbles in vitro and
in vivo for drug delivery via minimally invasive procedures. Although many
advanced capabilities and sophisticated control schemes have been achieved for
acoustically powered microrobots, many challenges remain to be solved. To
develop the next generation of intelligent micro/nanorobots, it is highly
desirable to identify the micro/nanorobots accurately and to control their
dynamic motion autonomously. Here we use reinforcement learning control
strategies to learn the microrobot dynamics and manipulate them through
acoustic forces. The results demonstrate, for the first time, autonomous
acoustic navigation of microbubbles in a microfluidic environment. Taking
advantage of the secondary radiation force, the microbubbles aggregate into a
large swarm, which is then driven along the desired trajectory. More than
100,000 images were used for training to capture the unexpected dynamics of
the microbubbles. This work validates robust control of the microrobots and
provides them with the computational intelligence to navigate independently in
an unstructured environment without outside assistance.
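The abstract does not spell out the learning setup, so the following is a
minimal, hypothetical sketch of the kind of control loop it describes: a
tabular Q-learning agent steering the swarm centroid across a discretised 2-D
workspace, with each discrete action standing in for excitation of one of four
assumed transducers. The grid size, target cell, reward shaping, and the step
helper are illustrative assumptions, not the authors' setup, which learns from
more than 100,000 microscope images.

```python
# Hypothetical sketch (not the authors' code): tabular Q-learning on a toy
# 2-D grid standing in for the microfluidic workspace. The microbubble swarm
# is reduced to its centroid, and each action stands in for exciting one of
# four assumed transducers that nudge the swarm up, down, left, or right.
import numpy as np

GRID = 10                                    # discretised workspace (GRID x GRID)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # assumed effect of each transducer
ACTIONS = len(MOVES)
TARGET = (8, 8)                              # hypothetical navigation target cell

rng = np.random.default_rng(0)
q = np.zeros((GRID, GRID, ACTIONS))          # Q-table over (row, col, action)

def step(state, action):
    """Apply one acoustic 'push'; reward the agent for reaching the target."""
    r = min(max(state[0] + MOVES[action][0], 0), GRID - 1)
    c = min(max(state[1] + MOVES[action][1], 0), GRID - 1)
    reward = 1.0 if (r, c) == TARGET else -0.01   # small per-step penalty
    return (r, c), reward, (r, c) == TARGET

alpha, gamma, eps = 0.1, 0.95, 0.1           # learning rate, discount, exploration
for episode in range(2000):
    state = (0, 0)                           # swarm starts in a fixed corner
    for _ in range(200):
        if rng.random() < eps:               # epsilon-greedy exploration
            action = int(rng.integers(ACTIONS))
        else:
            action = int(np.argmax(q[state]))
        nxt, reward, done = step(state, action)
        # standard one-step Q-learning update
        q[state][action] += alpha * (reward + gamma * np.max(q[nxt]) - q[state][action])
        state = nxt
        if done:
            break
```

In the system the paper describes, the observations would instead come from
microscope images of the swarm and the actions from acoustic excitation
parameters, so a neural-network policy would replace the Q-table; the sketch
only illustrates the loop structure.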
Related papers
- MEMTRACK: A Deep Learning-Based Approach to Microrobot Tracking in Dense
and Low-Contrast Environments [4.638136711579875]
Motion Enhanced Multi-level Tracker (MEMTrack) is a robust pipeline for detecting and tracking microrobots.
We trained and validated our model using bacterial micro-motors in collagen (tissue phantom) and tested it in collagen and aqueous media.
MEMTrack can quantify average bacterial speed with no statistically significant difference from the laboriously produced manual tracking data.
arXiv Detail & Related papers (2023-10-13T23:21:32Z)
- Navigation of micro-robot swarms for targeted delivery using reinforcement learning [0.0]
We use the Reinforcement Learning (RL) algorithms Proximal Policy Optimization (PPO) and Robust Policy Optimization (RPO) to navigate a swarm of 4, 9 and 16 microswimmers.
We examine the performance of both PPO and RPO under limited state information and also test their robustness to random target locations and sizes.
arXiv Detail & Related papers (2023-06-30T12:17:39Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Chemotaxis of sea urchin sperm cells through deep reinforcement learning [0.0]
In this work, we investigate how a model of a sea urchin sperm cell can self-learn chemotactic motion in a chemoattractant concentration field.
We employ an artificial neural network to act as a decision-making agent and facilitate the sperm cell to discover efficient maneuver strategies.
Our results provide insights into the chemotactic process of sea urchin sperm cells and offer guidance for the intelligent maneuvering of microrobots.
arXiv Detail & Related papers (2022-08-02T06:04:32Z)
- Smart Magnetic Microrobots Learn to Swim with Deep Reinforcement Learning [0.0]
Deep reinforcement learning is a promising method of autonomously developing robust controllers for creating smart microrobots.
Here, we report the development of a smart helical magnetic hydrogel microrobot that used the soft actor critic reinforcement learning algorithm to autonomously derive a control policy.
The reinforcement learning agent learned successful control policies with fewer than 100,000 training steps, demonstrating sample efficiency for fast learning.
arXiv Detail & Related papers (2022-01-14T18:42:18Z)
- Deep neural networks approach to microbial colony detection -- a comparative analysis [52.77024349608834]
This study investigates the performance of three deep learning approaches for object detection on the AGAR dataset.
The achieved results may serve as a benchmark for future experiments.
arXiv Detail & Related papers (2021-08-23T12:06:00Z)
- Autonomous object harvesting using synchronized optoelectronic microrobots [10.860767733334306]
Optoelectronic tweezer-driven microrobots (OETdMs) are a versatile micromanipulation technology.
We describe an approach to automated targeting and path planning to enable open-loop control of multiple microrobots.
arXiv Detail & Related papers (2021-03-08T17:24:15Z)
- Neural Network-based Virtual Microphone Estimator [111.79608275698274]
We propose a neural network-based virtual microphone estimator (NN-VME).
The NN-VME estimates virtual microphone signals directly in the time domain, by utilizing the precise estimation capability of the recent time-domain neural networks.
Experiments on the CHiME-4 corpus show that the proposed NN-VME achieves high virtual microphone estimation performance even for real recordings.
arXiv Detail & Related papers (2021-01-12T06:30:24Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction at the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Populations of Spiking Neurons for Reservoir Computing: Closed Loop Control of a Compliant Quadruped [64.64924554743982]
We present a framework for implementing central pattern generators with spiking neural networks to obtain closed loop robot control.
We demonstrate the learning of predefined gait patterns, speed control and gait transition on a simulated model of a compliant quadrupedal robot.
arXiv Detail & Related papers (2020-04-09T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.