Vision-based robot manipulation of transparent liquid containers in a laboratory setting
- URL: http://arxiv.org/abs/2404.16529v1
- Date: Thu, 25 Apr 2024 11:42:32 GMT
- Title: Vision-based robot manipulation of transparent liquid containers in a laboratory setting
- Authors: Daniel Schober, Ronja Güldenring, James Love, Lazaros Nalpantidis
- Abstract summary: We develop a vision-based system for liquid volume estimation and a simulation-driven pouring method.
We evaluate both components individually, followed by an applied real-world integration of cell culture automation using a UR5 robotic arm.
- Score: 3.443622476405787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Laboratory processes involving small volumes of solutions and active ingredients are often performed manually due to challenges in automation, such as high initial costs, semi-structured environments and protocol variability. In this work, we develop a flexible and cost-effective approach to address this gap by introducing a vision-based system for liquid volume estimation and a simulation-driven pouring method particularly designed for containers with small openings. We evaluate both components individually, followed by an applied real-world integration of cell culture automation using a UR5 robotic arm. Our work is fully reproducible: we share our code at https://github.com/DaniSchober/LabLiquidVision, and the newly introduced dataset LabLiquidVolume is available at https://data.dtu.dk/articles/dataset/LabLiquidVision/25103102.
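To make the pipeline concrete, the sketch below shows what a vision-based volume estimation component might look like: a small CNN regressor that maps an RGB frame of a container to a volume in millilitres. The class and function names are hypothetical and do not correspond to the LabLiquidVision repository's actual API.

```python
# Minimal sketch of a vision-based liquid volume estimator (illustrative only).
# The names below are hypothetical and do NOT reflect the LabLiquidVision API.
import torch
import torch.nn as nn

class VolumeRegressor(nn.Module):
    """Toy CNN that maps an RGB image of a container to an estimated volume in mL."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single scalar: estimated volume in mL

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        x = self.features(image).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = VolumeRegressor().eval()
    fake_image = torch.rand(1, 3, 224, 224)        # stand-in for a camera frame
    with torch.no_grad():
        volume_ml = model(fake_image).item()
    print(f"Estimated volume: {volume_ml:.1f} mL")  # untrained, so arbitrary
```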
Related papers
- TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies [95.30717188630432]
We introduce visual trace prompting to facilitate VLA models' spatial-temporal awareness for action prediction.
We develop a new TraceVLA model by finetuning OpenVLA on our own collected dataset of 150K robot manipulation trajectories.
We present a compact VLA model based on 4B Phi-3-Vision, pretrained on the Open-X-Embodiment and finetuned on our dataset.
arXiv Detail & Related papers (2024-12-13T18:40:51Z)
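Visual trace prompting, as summarized above, amounts to overlaying a trace of recent robot motion on the observation image before it is fed to the policy. A minimal, hypothetical sketch of such an overlay (not the TraceVLA implementation) could look like this:

```python
# Hypothetical sketch of visual trace prompting: draw the recent end-effector
# trajectory onto the observation image before feeding it to a policy.
# Illustration of the idea only, not the TraceVLA authors' code.
import numpy as np
import cv2

def overlay_visual_trace(image: np.ndarray, trace_px: list) -> np.ndarray:
    """Draw a polyline through past 2D end-effector positions (pixel coordinates)."""
    prompted = image.copy()
    if len(trace_px) >= 2:
        pts = np.array(trace_px, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(prompted, [pts], isClosed=False, color=(0, 0, 255), thickness=2)
    for (u, v) in trace_px:
        cv2.circle(prompted, (u, v), radius=3, color=(0, 255, 0), thickness=-1)
    return prompted

frame = np.zeros((240, 320, 3), dtype=np.uint8)          # stand-in camera frame
trace = [(50, 200), (80, 180), (120, 150), (160, 140)]   # past gripper positions
prompted_frame = overlay_visual_trace(frame, trace)      # visual prompt for the policy
```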
- LucidGrasp: Robotic Framework for Autonomous Manipulation of Laboratory Equipment with Different Degrees of Transparency via 6D Pose Estimation [8.961549735358213]
This work develops a robotic framework with an autonomous mode for manipulating liquid-filled objects.
The proposed framework can be applied to laboratory automation, since it allows non-trivial manipulation tasks to be solved.
arXiv Detail & Related papers (2024-10-10T10:40:42Z)
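The framework above builds on 6D pose estimation of labware with varying transparency. As a generic illustration of how an estimated object pose is typically turned into a grasp target, one can compose homogeneous transforms; the frames and offsets below are assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch: turn an estimated 6D object pose into a grasp target pose
# by composing homogeneous transforms. Illustrative only; not the LucidGrasp code.
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Pose of the object in the camera frame, e.g. output of a 6D pose estimator.
T_cam_obj = make_pose(np.eye(3), np.array([0.10, 0.00, 0.45]))
# Assumed grasp offset in the object frame (e.g. approach 5 cm above the opening).
T_obj_grasp = make_pose(np.eye(3), np.array([0.00, 0.00, 0.05]))
# Assumed camera extrinsics: pose of the camera in the robot base frame.
T_base_cam = make_pose(np.eye(3), np.array([0.30, 0.00, 0.50]))

# Grasp target expressed in the robot base frame, ready for motion planning.
T_base_grasp = T_base_cam @ T_cam_obj @ T_obj_grasp
print(T_base_grasp[:3, 3])  # target gripper position
```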
- AlabOS: A Python-based Reconfigurable Workflow Management Framework for Autonomous Laboratories [3.8330070166920556]
We introduce AlabOS, a general-purpose software framework for orchestrating experiments and managing resources.
AlabOS features a reconfigurable experiment workflow model and a resource reservation mechanism, enabling the simultaneous execution of varied tasks.
We demonstrate the implementation of AlabOS in a prototype autonomous materials laboratory, A-Lab, with around 3,500 samples synthesized over 1.5 years.
arXiv Detail & Related papers (2024-05-22T18:59:39Z)
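A resource reservation mechanism of the kind described above lets concurrently running tasks claim shared devices without conflicts. The sketch below shows one generic way such a pattern can be implemented in Python; it is illustrative only and does not reflect the AlabOS API.

```python
# Hypothetical sketch of a resource-reservation pattern for concurrent lab tasks.
# Illustrative only; AlabOS's actual API and semantics may differ.
import threading
from contextlib import contextmanager

class ResourcePool:
    """Hands out exclusive reservations on named devices (e.g. a furnace or robot arm)."""
    def __init__(self, devices):
        self._locks = {name: threading.Lock() for name in devices}

    @contextmanager
    def reserve(self, name: str):
        lock = self._locks[name]
        lock.acquire()           # block until the device is free
        try:
            yield name           # the task now has exclusive use of the device
        finally:
            lock.release()       # release so other queued tasks can proceed

pool = ResourcePool(["furnace_1", "robot_arm"])

def synthesis_task(sample_id: str):
    with pool.reserve("furnace_1"):
        print(f"{sample_id}: heating in furnace_1")

threads = [threading.Thread(target=synthesis_task, args=(f"sample_{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```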
- Robot Fleet Learning via Policy Merging [58.5086287737653]
We propose FLEET-MERGE to efficiently merge policies in the fleet setting.
We show that FLEET-MERGE consolidates the behavior of policies trained on 50 tasks in the Meta-World environment.
We introduce a novel robotic tool-use benchmark, FLEET-TOOLS, for fleet policy learning in compositional and contact-rich robot manipulation tasks.
arXiv Detail & Related papers (2023-10-02T17:23:51Z)
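Policy merging in parameter space can be illustrated with a naive baseline that averages the weights of identically structured policy networks; FLEET-MERGE's actual algorithm is more involved, so the snippet below only sketches the general idea.

```python
# Naive parameter-averaging baseline for merging identically structured policies.
# Illustrative only; this is not the FLEET-MERGE algorithm itself.
import torch
import torch.nn as nn

def make_policy() -> nn.Module:
    """Tiny MLP policy: observation (8-dim) -> action (2-dim)."""
    return nn.Sequential(nn.Linear(8, 32), nn.Tanh(), nn.Linear(32, 2))

def average_policies(policies) -> nn.Module:
    """Average the parameters of several policies with identical architecture."""
    merged = make_policy()
    state_dicts = [p.state_dict() for p in policies]
    merged_state = {
        key: torch.stack([sd[key] for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }
    merged.load_state_dict(merged_state)
    return merged

fleet = [make_policy() for _ in range(50)]   # e.g. one policy per task or robot
merged_policy = average_policies(fleet)
action = merged_policy(torch.zeros(1, 8))    # run the merged policy on an observation
```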
- Closing the loop: Autonomous experiments enabled by machine-learning-based online data analysis in synchrotron beamline environments [80.49514665620008]
Machine learning can be used to enhance research involving large or rapidly generated datasets.
In this study, we describe the incorporation of ML into a closed-loop workflow for X-ray reflectometry (XRR).
We present solutions that provide elementary data analysis in real time during the experiment without introducing additional software dependencies into the beamline control software environment.
arXiv Detail & Related papers (2023-06-20T21:21:19Z)
- FluidLab: A Differentiable Environment for Benchmarking Complex Fluid Manipulation [80.63838153351804]
We introduce FluidLab, a simulation environment with a diverse set of manipulation tasks involving complex fluid dynamics.
At the heart of our platform is a fully differentiable physics simulator, providing GPU-accelerated simulations and gradient calculations.
We propose several domain-specific optimization schemes coupled with differentiable physics.
arXiv Detail & Related papers (2023-03-04T07:24:22Z)
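The benefit of a fully differentiable simulator is that task losses can be optimized by gradient descent through the physics. The toy example below replaces FluidLab's simulator with a one-line differentiable stand-in purely to convey that pattern; nothing in it comes from FluidLab itself.

```python
# Toy illustration of trajectory optimization through a differentiable simulator.
# The "simulator" here is a one-line stand-in; FluidLab's real physics is far richer.
import torch

def simulate_poured_volume(tilt_angle: torch.Tensor) -> torch.Tensor:
    """Differentiable stand-in: poured volume (mL) grows smoothly with tilt angle (rad)."""
    return 100.0 * torch.sigmoid(4.0 * (tilt_angle - 1.0))

target_volume = torch.tensor(30.0)              # mL we want to pour
tilt = torch.tensor(0.5, requires_grad=True)    # controllable pouring parameter
optimizer = torch.optim.Adam([tilt], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = (simulate_poured_volume(tilt) - target_volume) ** 2
    loss.backward()                             # gradient flows through the simulator
    optimizer.step()

print(f"optimized tilt: {tilt.item():.3f} rad, "
      f"poured: {simulate_poured_volume(tilt).item():.1f} mL")
```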
- Learn Fast, Segment Well: Fast Object Segmentation Learning on the iCub Robot [20.813028212068424]
We study different techniques that allow adapting an object segmentation model in presence of novel objects or different domains.
We propose a pipeline for fast instance segmentation learning for robotic applications where data arrive in a stream.
We benchmark the proposed pipeline on two datasets and deploy it on a real robot, the iCub humanoid.
arXiv Detail & Related papers (2022-06-27T17:14:04Z)
- Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping [58.50342759993186]
We propose a visuo-tactile model for real-time estimation of the liquid volume inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is controlled and adjusted in real time based on the estimation model.
arXiv Detail & Related papers (2022-02-23T13:38:31Z)
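Fusing visual and tactile cues is commonly done by encoding each modality separately and concatenating the embeddings before a regression head. The sketch below is a hypothetical minimal architecture in that spirit, not the model from the cited paper.

```python
# Hypothetical sketch of visuo-tactile fusion for liquid volume regression:
# encode each modality, concatenate the embeddings, regress a single scalar.
# Illustrative only; not the architecture from the cited paper.
import torch
import torch.nn as nn

class VisuoTactileVolumeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision_encoder = nn.Sequential(           # RGB image -> 32-dim embedding
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32),
        )
        self.tactile_encoder = nn.Sequential(          # tactile array (64 taxels) -> 32-dim
            nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32),
        )
        self.head = nn.Linear(64, 1)                   # fused embedding -> volume (mL)

    def forward(self, image, tactile):
        fused = torch.cat([self.vision_encoder(image), self.tactile_encoder(tactile)], dim=1)
        return self.head(fused)

model = VisuoTactileVolumeNet().eval()
with torch.no_grad():
    volume = model(torch.rand(1, 3, 128, 128), torch.rand(1, 64))
print(volume.shape)  # torch.Size([1, 1])
```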
- SAGCI-System: Towards Sample-Efficient, Generalizable, Compositional, and Incremental Robot Learning [41.19148076789516]
We introduce a systematic learning framework called SAGCI-system towards achieving the above four requirements.
Our system first takes the raw point clouds gathered by the camera mounted on the robot's wrist as input and produces an initial model of the surrounding environment represented as a URDF.
The robot then uses interactive perception to interact with the environment and verify and modify the URDF online.
arXiv Detail & Related papers (2021-11-29T16:53:49Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Integrated Benchmarking and Design for Reproducible and Accessible Evaluation of Robotic Agents [61.36681529571202]
We describe a new concept for reproducible robotics research that integrates development and benchmarking.
One of the central components of this setup is the Duckietown Autolab, a standardized setup that is itself relatively low-cost and reproducible.
We validate the system by analyzing the repeatability of experiments conducted using the infrastructure and show that there is low variance across different robot hardware and across different remote labs.
arXiv Detail & Related papers (2020-09-09T15:31:29Z)
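Repeatability claims of this kind are typically supported by comparing the spread of an evaluation metric across robots and labs; the snippet below sketches such an analysis with made-up numbers, purely for illustration.

```python
# Minimal, illustrative repeatability analysis: compare the spread of an
# evaluation metric across robots and across labs. Numbers are made up.
import numpy as np

# metric[lab][robot] = scores from repeated runs (e.g. lane-following performance)
metric = {
    "lab_A": {"robot_1": [0.91, 0.90, 0.92], "robot_2": [0.89, 0.90, 0.91]},
    "lab_B": {"robot_3": [0.90, 0.92, 0.91], "robot_4": [0.88, 0.90, 0.89]},
}

per_robot_means = [np.mean(runs)
                   for robots in metric.values()
                   for runs in robots.values()]
per_lab_means = [np.mean([np.mean(runs) for runs in robots.values()])
                 for robots in metric.values()]

print("variance across robots:", np.var(per_robot_means))
print("variance across labs:  ", np.var(per_lab_means))
```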
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.