Vision-based Manipulation of Transparent Plastic Bags in Industrial Setups
- URL: http://arxiv.org/abs/2411.09623v2
- Date: Tue, 19 Nov 2024 10:15:56 GMT
- Title: Vision-based Manipulation of Transparent Plastic Bags in Industrial Setups
- Authors: F. Adetunji, A. Karukayil, P. Samant, S. Shabana, F. Varghese, U. Upadhyay, R. A. Yadav, A. Partridge, E. Pendleton, R. Plant, Y. Petillot, M. Koskinopoulou
- Abstract summary: This paper addresses the challenges of vision-based manipulation for autonomous cutting and unpacking of transparent plastic bags in industrial setups.
The proposed solution employs advanced Machine Learning algorithms, particularly Convolutional Neural Networks (CNNs), to identify transparent plastic bags under varying lighting and background conditions.
Tracking algorithms and depth sensing technologies are utilized for 3D spatial awareness during pick-and-place operations.
- Score: 0.37187295985559027
- License:
- Abstract: This paper addresses the challenges of vision-based manipulation for autonomous cutting and unpacking of transparent plastic bags in industrial setups, aligning with the Industry 4.0 paradigm. Industry 4.0, driven by data, connectivity, analytics, and robotics, promises enhanced accessibility and sustainability throughout the value chain. The integration of autonomous systems, including collaborative robots (cobots), into industrial processes is pivotal for efficiency and safety. The proposed solution employs advanced Machine Learning algorithms, particularly Convolutional Neural Networks (CNNs), to identify transparent plastic bags under varying lighting and background conditions. Tracking algorithms and depth sensing technologies are utilized for 3D spatial awareness during pick-and-place operations. The system addresses challenges in grasping and manipulation, considering optimal grasp points, compliance control with vacuum gripping technology, and real-time automation for safe interaction in dynamic environments. The system's successful testing and validation in the lab with the FRANKA robot arm showcase its potential for widespread industrial applications, demonstrating effectiveness in automating the unpacking and cutting of transparent plastic bags for an 8-stack bulk-loader based on specific requirements and rigorous testing.
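The abstract above couples CNN-based detection of transparent bags with depth sensing to obtain 3D spatial awareness for pick-and-place. The paper's implementation is not reproduced here; the block below is only a minimal, hypothetical sketch of that localization step, assuming a detector that returns a pixel bounding box, a depth image in metres aligned to the colour image, and known pinhole intrinsics (fx, fy, cx, cy). All names and numbers are illustrative, not the authors' code. Taking a median over a small patch, rather than one pixel, is one common way to cope with the noisy or missing depth returns transparent material tends to produce.

```python
import numpy as np

def pick_point_from_detection(bbox, depth_m, fx, fy, cx, cy):
    """Back-project the centre of a detected bag bounding box to a 3D point
    in the camera frame, using an aligned depth image and pinhole intrinsics.

    bbox    : (x_min, y_min, x_max, y_max) in pixels, e.g. from a CNN detector
    depth_m : HxW depth image in metres, aligned to the colour image
    """
    x_min, y_min, x_max, y_max = bbox
    u = int((x_min + x_max) / 2)          # pixel column of the box centre
    v = int((y_min + y_max) / 2)          # pixel row of the box centre

    # Median depth over a small patch is more robust than a single pixel.
    patch = depth_m[max(v - 5, 0):v + 6, max(u - 5, 0):u + 6]
    z = float(np.median(patch[patch > 0]))  # ignore invalid (zero) readings

    # Pinhole back-projection: pixel (u, v) and depth z -> camera-frame XYZ.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Illustrative usage with made-up numbers (not from the paper).
if __name__ == "__main__":
    depth = np.full((480, 640), 0.85)      # flat scene 0.85 m from the camera
    target = pick_point_from_detection(
        bbox=(300, 200, 420, 330), depth_m=depth,
        fx=615.0, fy=615.0, cx=320.0, cy=240.0)
    print("camera-frame pick point [m]:", target)
```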
Related papers
- Transforming the Hybrid Cloud for Emerging AI Workloads [81.15269563290326]
This white paper envisions transforming hybrid cloud systems to meet the growing complexity of AI workloads.
The proposed framework addresses critical challenges in energy efficiency, performance, and cost-effectiveness.
This joint initiative aims to establish hybrid clouds as secure, efficient, and sustainable platforms.
arXiv Detail & Related papers (2024-11-20T11:57:43Z)
- RAMPA: Robotic Augmented Reality for Machine Programming and Automation [4.963604518596734]
This paper introduces Robotic Augmented Reality for Machine Programming (RAMPA)
RAMPA is a system that utilizes the capabilities of state-of-the-art and commercially available AR headsets, e.g., Meta Quest 3.
Our approach enables in-situ data recording, visualization, and fine-tuning of skill demonstrations directly within the user's physical environment.
arXiv Detail & Related papers (2024-10-17T10:21:28Z)
- SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation [82.61572106180705]
This paper presents a unified approach using vision-language models (VLMs) to improve keypoint prediction across various garment categories.
We created a large-scale synthetic dataset using advanced simulation techniques, allowing scalable training without extensive real-world data.
Experimental results indicate that the VLM-based method significantly enhances keypoint detection accuracy and task success rates.
arXiv Detail & Related papers (2024-09-26T17:26:16Z)
- RoboGrind: Intuitive and Interactive Surface Treatment with Industrial Robots [7.7407798321584185]
RoboGrind is an integrated system for the intuitive, interactive automation of surface treatment tasks with industrial robots.
It includes a sophisticated 3D perception pipeline for surface scanning and automatic defect identification.
arXiv Detail & Related papers (2024-02-26T13:01:28Z)
- Robotic Handling of Compliant Food Objects by Robust Learning from Demonstration [79.76009817889397]
We propose a robust learning policy based on Learning from Demonstration (LfD) for robotic grasping of food compliant objects.
We present an LfD learning policy that automatically removes inconsistent demonstrations and estimates the teacher's intended policy.
The proposed approach has a vast range of potential applications in the food industry and related sectors; a hedged sketch of the demonstration-filtering idea appears after this entry.
arXiv Detail & Related papers (2023-09-22T13:30:26Z)
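The robotic food-handling entry above mentions automatically removing inconsistent demonstrations before estimating the teacher's intended policy. The paper's actual criterion is not given in this summary, so the following is only a hedged illustration of one common filtering idea: resample each demonstrated trajectory to a fixed length, compare it to the element-wise median trajectory, and drop demonstrations whose deviation is an outlier. Function names and thresholds are assumptions.

```python
import numpy as np

def resample(traj, n=50):
    """Linearly resample a (T, D) trajectory to n waypoints."""
    traj = np.asarray(traj, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t_new, t_old, traj[:, d])
                     for d in range(traj.shape[1])], axis=1)

def filter_inconsistent(demos, n=50, k=2.0):
    """Keep demonstrations whose mean distance to the median trajectory is
    within k standard deviations of the cohort -- a stand-in for the paper's
    (unspecified here) consistency criterion."""
    resampled = np.stack([resample(d, n) for d in demos])               # (N, n, D)
    median_traj = np.median(resampled, axis=0)                          # (n, D)
    dev = np.linalg.norm(resampled - median_traj, axis=2).mean(axis=1)  # (N,)
    keep = dev <= dev.mean() + k * dev.std()
    return [d for d, ok in zip(demos, keep) if ok], keep

# Illustrative usage: three similar demos and one clearly inconsistent one.
if __name__ == "__main__":
    good = [np.column_stack([np.linspace(0, 1, m), np.linspace(0, 0.5, m)])
            for m in (40, 55, 60)]
    bad = np.column_stack([np.linspace(0, 1, 50), np.linspace(0, 5.0, 50)])
    kept, mask = filter_inconsistent(good + [bad], k=1.0)
    print("kept demonstrations:", mask)   # expect the last one flagged False
```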
- Towards Packaging Unit Detection for Automated Palletizing Tasks [5.235268087662475]
We propose an approach to this challenging problem that is fully trained on synthetically generated data.
The proposed approach is able to handle sparse and low quality sensor data.
We conduct an extensive evaluation on real-world data with a wide range of different retail products; a toy sketch of the synthetic-data idea appears after this entry.
arXiv Detail & Related papers (2023-08-11T15:37:38Z)
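The packaging-unit entry above trains entirely on synthetically generated data and stresses robustness to sparse, low-quality sensor input. The block below is only a toy illustration of that general recipe under invented parameters: fabricate a labelled synthetic depth scene, then degrade it with pixel dropout and noise so the training distribution resembles a poor real sensor. Nothing here is the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade_depth(depth_m, dropout=0.3, noise_std=0.01):
    """Augment a clean synthetic depth image so a model trained on it also
    copes with sparse, low-quality real sensor data: add Gaussian noise and
    randomly drop pixels to zero (invalid returns)."""
    out = depth_m + rng.normal(scale=noise_std, size=depth_m.shape)
    mask = rng.random(depth_m.shape) < dropout
    out[mask] = 0.0                                  # simulate missing returns
    return np.clip(out, 0.0, None)

def synth_pallet_scene(h=120, w=160, n_units=4):
    """Tiny synthetic scene generator: a flat pallet plane with a few
    box-shaped packaging units at random positions and heights."""
    depth = np.full((h, w), 1.5)                     # camera 1.5 m above pallet
    boxes = []
    for _ in range(n_units):
        bh, bw = rng.integers(15, 35, size=2)
        top = rng.uniform(0.2, 0.6)                  # box top, metres above pallet
        y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
        depth[y:y + bh, x:x + bw] = 1.5 - top
        boxes.append((x, y, x + bw, y + bh))         # ground-truth label
    return depth, boxes

if __name__ == "__main__":
    clean, labels = synth_pallet_scene()
    noisy = degrade_depth(clean)
    print("labels:", labels)
    print("fraction of invalid pixels:", round(float((noisy == 0).mean()), 2))
```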
- Virtual Reality via Object Poses and Active Learning: Realizing Telepresence Robots with Aerial Manipulation Capabilities [39.29763956979895]
This article presents a novel telepresence system for advancing aerial manipulation in dynamic and unstructured environments.
The proposed system not only features a haptic device, but also a virtual reality (VR) interface that provides real-time 3D displays of the robot's workspace.
We show over 70 robust executions of pick-and-place, force application, and peg-in-hole tasks with the DLR cable-Suspended Aerial Manipulator (SAM).
arXiv Detail & Related papers (2022-10-18T08:42:30Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities; a minimal sketch of the bottleneck regularizer follows this entry.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
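The Bayesian imitation learning entry above attributes its generalization to regularizing CNNs with a Variational Information Bottleneck. The snippet below is only a minimal numeric sketch of the extra loss term that approach adds (the KL divergence between the stochastic bottleneck and a standard normal prior, plus the reparameterization trick); the encoder, task loss, and the weight beta are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def vib_kl(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dims and
    averaged over the batch -- the regulariser added to the task loss in a
    variational information bottleneck."""
    kl_per_sample = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)
    return kl_per_sample.mean()

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, which keeps the bottleneck differentiable
    when the same trick is used inside an autodiff framework."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Illustrative usage: pretend a CNN encoder produced these bottleneck stats
# for a batch of 4 images; the total loss would be task_loss + beta * vib_kl(...).
if __name__ == "__main__":
    mu = rng.normal(scale=0.5, size=(4, 32))        # batch of 4, 32-dim latent
    logvar = rng.normal(scale=0.1, size=(4, 32))
    beta = 1e-3                                     # bottleneck weight (assumed)
    z = reparameterize(mu, logvar)
    print("z shape:", z.shape, "KL term:", round(float(vib_kl(mu, logvar)), 4))
```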
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images; a toy illustration of the particle-deformation idea appears after this entry.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
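The EIP entry above describes modelling the tactile sensor as coordinated particles whose deformation under contact is regulated by an elastic property. The actual EIP formulation is not reproduced here; the block below is only a toy 1-D illustration of that general idea, with a row of surface particles, a circular indenter, and a crude projected spring relaxation. All parameters are invented for the example.

```python
import numpy as np

def indent_profile(x, center, radius, depth):
    """Vertical penetration of a circular indenter of given radius pressed
    'depth' into a flat surface, evaluated at horizontal positions x."""
    dx = x - center
    inside = np.abs(dx) < radius
    prof = np.zeros_like(x)
    prof[inside] = depth - (radius - np.sqrt(radius**2 - dx[inside]**2))
    return np.clip(prof, 0.0, None)

def deform(penetration, stiffness=1.0, coupling=0.3, iters=200):
    """Toy elastic response: each particle is pulled back to its rest height
    by its own spring and coupled to its neighbours, while particles under
    the indenter are forced to at least the penetration depth."""
    u = penetration.copy()
    for _ in range(iters):
        neigh = 0.5 * (np.roll(u, 1) + np.roll(u, -1))
        u = (coupling * neigh) / (coupling + stiffness)   # equilibrium of the two springs
        u = np.maximum(u, penetration)                    # contact constraint
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 10.0, 101)                # 101 surface particles
    pen = indent_profile(x, center=5.0, radius=2.0, depth=0.5)
    u = deform(pen)
    print("max displacement:", round(float(u.max()), 3))
    print("particles displaced:", int((u > 1e-3).sum()))
```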
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online; a structural sketch of this data flow follows this entry.
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
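The COCOI entry says the method encodes a context embedding of dynamics properties online and feeds it to a deep RL policy. The details are not in this summary, so the block below is only a structural sketch of the data flow (rolling transition history, pooled embedding, concatenated policy input); the encoder here is untrained, and every dimension and name is an assumption rather than the authors' design.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(0)

class ContextEncoder:
    """Structural sketch of an online dynamics-context encoder: recent
    (state, action, next_state) transitions are featurised, passed through a
    small (here untrained) linear layer, and mean-pooled into one embedding.
    In a method like COCOI this encoder would be learned with the RL policy."""

    def __init__(self, state_dim, action_dim, embed_dim=8, history=10):
        feat_dim = 2 * state_dim + action_dim
        self.w = rng.normal(scale=0.1, size=(feat_dim, embed_dim))
        self.buffer = deque(maxlen=history)

    def observe(self, state, action, next_state):
        self.buffer.append(np.concatenate([state, action, next_state]))

    def embedding(self):
        if not self.buffer:
            return np.zeros(self.w.shape[1])
        feats = np.stack(self.buffer)            # (history, feat_dim)
        return np.tanh(feats @ self.w).mean(axis=0)

def policy_input(obs, encoder):
    """The policy sees the current observation concatenated with the context."""
    return np.concatenate([obs, encoder.embedding()])

# Illustrative rollout with random dynamics, just to show the data flow.
if __name__ == "__main__":
    enc = ContextEncoder(state_dim=4, action_dim=2)
    state = rng.normal(size=4)
    for _ in range(12):
        action = rng.normal(size=2)
        next_state = state + 0.1 * rng.normal(size=4)
        enc.observe(state, action, next_state)
        state = next_state
    print("policy input dim:", policy_input(state, enc).shape[0])  # 4 + 8
```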