Towards Packaging Unit Detection for Automated Palletizing Tasks
- URL: http://arxiv.org/abs/2308.06306v1
- Date: Fri, 11 Aug 2023 15:37:38 GMT
- Title: Towards Packaging Unit Detection for Automated Palletizing Tasks
- Authors: Markus Völk, Kilian Kleeberger, Werner Kraus, Richard Bormann
- Abstract summary: We propose an approach to this challenging problem that is fully trained on synthetically generated data.
The proposed approach is able to handle sparse and low quality sensor data.
We conduct an extensive evaluation on real-world data with a wide range of different retail products.
- Score: 5.235268087662475
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For various automated palletizing tasks, the detection of packaging units is
a crucial step preceding the actual handling of the packaging units by an
industrial robot. We propose an approach to this challenging problem that is
fully trained on synthetically generated data and can be robustly applied to
arbitrary real world packaging units without further training or setup effort.
The proposed approach is able to handle sparse and low quality sensor data, can
exploit prior knowledge if available and generalizes well to a wide range of
products and application scenarios. To demonstrate the practical use of our
approach, we conduct an extensive evaluation on real-world data with a wide
range of different retail products. Further, we integrated our approach into a
lab demonstrator, and a commercial solution will be marketed through an
industrial partner.
Related papers
- SKT: Integrating State-Aware Keypoint Trajectories with Vision-Language Models for Robotic Garment Manipulation [82.61572106180705]
This paper presents a unified approach using vision-language models (VLMs) to improve keypoint prediction across various garment categories.
We created a large-scale synthetic dataset using advanced simulation techniques, allowing scalable training without extensive real-world data.
Experimental results indicate that the VLM-based method significantly enhances keypoint detection accuracy and task success rates.
arXiv Detail & Related papers (2024-09-26T17:26:16Z)
- Information-driven Affordance Discovery for Efficient Robotic Manipulation [14.863105174430087]
We argue that well-directed interactions with the environment can mitigate this problem.
We provide a theoretical justification of our approach and we empirically validate the approach both in simulation and real-world tasks.
Our method, which we dub IDA, enables the efficient discovery of visual affordances for several action primitives.
arXiv Detail & Related papers (2024-05-06T21:25:51Z)
- OpenPack: A Large-scale Dataset for Recognizing Packaging Works in IoT-enabled Logistic Environments [6.2454830041363145]
We introduce a new large-scale dataset for packaging work recognition called OpenPack.
OpenPack contains 53.8 hours of multimodal sensor data, including acceleration data, keypoints, depth images, and readings from IoT-enabled devices.
We apply state-of-the-art human activity recognition techniques to the dataset and provide future directions of complex work activity recognition studies.
arXiv Detail & Related papers (2022-12-10T13:01:18Z)
- Deep Learning based pipeline for anomaly detection and quality enhancement in industrial binder jetting processes [68.8204255655161]
Anomaly detection describes methods of finding abnormal states, instances or data points that differ from a normal value space.
This paper contributes to a data-centric way of approaching artificial intelligence in industrial production.
arXiv Detail & Related papers (2022-09-21T08:14:34Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Stronger Generalization Guarantees for Robot Learning by Combining Generative Models and Real-World Data [5.935761705025763]
We provide a framework for deriving generalization guarantees by leveraging a finite dataset of real-world environments.
We demonstrate our approach on two simulated systems with nonlinear/hybrid dynamics and rich sensing modalities.
arXiv Detail & Related papers (2021-11-16T20:13:10Z)
- An Image Processing Pipeline for Automated Packaging Structure Recognition [60.56493342808093]
We propose a cognitive system for the fully automated recognition of packaging structures for standardized logistics shipments based on single RGB images.
Our contribution contains descriptions of a suitable system design and its evaluation on relevant real-world data.
arXiv Detail & Related papers (2020-09-29T07:26:08Z)
- Fully-Automated Packaging Structure Recognition in Logistics Environments [60.56493342808093]
We propose a method for complete automation of packaging structure recognition.
Our algorithm is based on deep learning models, more precisely convolutional neural networks for instance segmentation in images.
We show that the solution correctly recognizes the packaging structure in approximately 85% of our test cases, rising to 91% when focusing on the most common package types.
arXiv Detail & Related papers (2020-08-11T10:57:23Z)
- Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning [75.56839075060819]
Traditional robotic approaches rely on an accurate model of the environment, a detailed description of how to perform the task, and a robust perception system to keep track of the current state.
Reinforcement learning approaches can operate directly from raw sensory inputs with only a reward signal to describe the task, but they are extremely sample-inefficient and brittle.
In this work, we combine the strengths of model-based methods with the flexibility of learning-based methods to obtain a general method that is able to overcome inaccuracies in the robotics perception/actuation pipeline.
arXiv Detail & Related papers (2020-05-21T19:47:05Z)
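Several of the related papers above recognize packaging structure via per-pixel instance segmentation. As a purely illustrative aside (a minimal sketch, not taken from any of the cited papers), the step from a segmentation result to a packaging-structure summary can be as simple as collecting a bounding box per detected instance; all names and the toy instance map below are hypothetical:

```python
# Hypothetical post-processing sketch: given a per-pixel instance map
# (0 = background, 1..N = detected package faces), derive one bounding
# box per package. Illustration only; not the cited papers' pipeline.

def package_boxes(instance_map):
    """Return {instance_id: (min_row, min_col, max_row, max_col)}."""
    boxes = {}
    for r, row in enumerate(instance_map):
        for c, label in enumerate(row):
            if label == 0:
                continue  # skip background pixels
            if label not in boxes:
                boxes[label] = [r, c, r, c]
            else:
                b = boxes[label]
                b[0] = min(b[0], r); b[1] = min(b[1], c)
                b[2] = max(b[2], r); b[3] = max(b[3], c)
    return {k: tuple(v) for k, v in boxes.items()}

# Toy 6x8 instance map with two detected packages (labels 1 and 2).
toy_map = [
    [0, 1, 1, 0, 0, 2, 2, 2],
    [0, 1, 1, 0, 0, 2, 2, 2],
    [0, 1, 1, 0, 0, 2, 2, 2],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
boxes = package_boxes(toy_map)
print(len(boxes))   # → 2  (number of detected packages)
print(boxes[1])     # → (0, 1, 2, 2)
```

In a real system the instance map would come from a trained segmentation model, and the boxes would feed the downstream pose estimation and robot grasp planning; this sketch only shows the bookkeeping step.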
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.