Robotic Waste Sorter with Agile Manipulation and Quickly Trainable
Detector
- URL: http://arxiv.org/abs/2104.01260v1
- Date: Fri, 2 Apr 2021 22:19:34 GMT
- Title: Robotic Waste Sorter with Agile Manipulation and Quickly Trainable
Detector
- Authors: Takuya Kiyokawa, Hiroki Katayama, Yuya Tatsuta, Jun Takamatsu, Tsukasa
Ogasawara
- Abstract summary: The goal of automating waste sorting is to have robots take over the human role of robustly detecting and agilely manipulating waste items.
First, we propose a combined manipulation method using graspless push-and-drop and pick-and-release manipulation.
Second, we propose a robotic system that can automatically collect object images to quickly train a deep neural network model.
- Score: 3.3073775218038883
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Owing to human labor shortages, the automation of labor-intensive manual
waste sorting is needed. The goal of automating waste sorting is to have robots
take over the human role of robustly detecting and agilely manipulating waste
items. To achieve this, we propose three methods. First, we propose a combined
manipulation method using graspless push-and-drop and pick-and-release
manipulation. Second, we propose a robotic system that can automatically
collect object images to quickly train a deep neural network model. Third, we
propose a method to mitigate the differences in the appearance of target
objects between two scenes: one for dataset collection and the other for waste
sorting in a recycling factory. If such differences exist, the performance of a
trained waste detector can degrade. We address differences in illumination and
background by applying object scaling, histogram matching with histogram
equalization, and background synthesis to the source target-object images.
Through experiments in an indoor waste-sorting workplace, we confirmed that the
proposed methods enable quick collection of training image sets for three
classes of waste items (aluminum cans, glass bottles, and plastic bottles) and
detection of these items with higher performance than methods that do not
account for the differences. We also confirmed that the proposed methods enable
the robot to manipulate the items quickly.
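The third step, bridging appearance differences between the collection scene and the factory scene, can be sketched with standard image operations. Below is a minimal illustration in Python using OpenCV and NumPy; the function names, parameters, and compositing strategy are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the appearance-gap mitigation described in the abstract:
# object scaling, histogram matching with equalization, and background synthesis.
# Function names and parameters are illustrative assumptions, not the paper's code.
import cv2
import numpy as np

def match_histogram(channel: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Map the channel's intensity CDF onto the reference channel's CDF."""
    src_hist, _ = np.histogram(channel.ravel(), 256, (0, 256))
    ref_hist, _ = np.histogram(reference.ravel(), 256, (0, 256))
    src_cdf = np.cumsum(src_hist) / channel.size
    ref_cdf = np.cumsum(ref_hist) / reference.size
    lut = np.interp(src_cdf, ref_cdf, np.arange(256)).astype(np.uint8)
    return lut[channel]

def synthesize_training_image(obj_bgr, obj_mask, factory_bgr, scale=0.8):
    """Scale a lab-collected object image (uint8 BGR), match its appearance to a
    factory reference image, and composite it onto a factory background."""
    # 1) Object scaling to roughly match the factory camera's viewpoint.
    h, w = obj_bgr.shape[:2]
    obj = cv2.resize(obj_bgr, (int(w * scale), int(h * scale)))
    mask = cv2.resize(obj_mask, (int(w * scale), int(h * scale)),
                      interpolation=cv2.INTER_NEAREST)

    # 2) Histogram equalization + matching per channel to bridge illumination.
    matched = np.stack(
        [match_histogram(cv2.equalizeHist(np.ascontiguousarray(obj[..., c])),
                         factory_bgr[..., c])
         for c in range(3)], axis=-1)

    # 3) Background synthesis: composite the object onto a factory background.
    #    (Here the same factory image serves as reference and background crop.)
    bg = cv2.resize(factory_bgr, (matched.shape[1], matched.shape[0]))
    alpha = (mask > 0)[..., None]
    return np.where(alpha, matched, bg)
```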
Related papers
- Good Grasps Only: A data engine for self-supervised fine-tuning of pose estimation using grasp poses for verification [0.0]
We present a novel method for self-supervised fine-tuning of pose estimation for bin-picking.
Our approach enables the robot to automatically obtain training data without manual labeling.
Our pipeline allows the system to fine-tune while the process is running, removing the need for a learning phase.
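As described, the data engine uses grasp outcomes to verify pose estimates and so obtain labels without manual annotation. A minimal sketch of such a verification loop is below; `capture_image`, `estimate_pose`, `attempt_grasp`, and `fine_tune` are hypothetical placeholders, not the paper's API.

```python
# Illustrative sketch of grasp-verified self-labeling for pose estimation.
# All callables (capture_image, estimate_pose, attempt_grasp, fine_tune) are
# hypothetical placeholders standing in for the bin-picking system's components.
from typing import Callable, List, Tuple
import numpy as np

def self_supervised_round(
    capture_image: Callable[[], np.ndarray],
    estimate_pose: Callable[[np.ndarray], np.ndarray],   # image -> 4x4 object pose
    attempt_grasp: Callable[[np.ndarray], bool],          # pose -> grasp success
    fine_tune: Callable[[List[Tuple[np.ndarray, np.ndarray]]], None],
    n_attempts: int = 50,
) -> None:
    """Collect (image, pose) pairs verified by successful grasps, then fine-tune."""
    verified: List[Tuple[np.ndarray, np.ndarray]] = []
    for _ in range(n_attempts):
        image = capture_image()
        pose = estimate_pose(image)
        if attempt_grasp(pose):
            # A successful grasp is treated as evidence that the estimated
            # pose was good enough, so the pair becomes a training label.
            verified.append((image, pose))
    if verified:
        fine_tune(verified)   # can run while the picking process continues
```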
arXiv Detail & Related papers (2024-09-17T19:26:21Z)
- Affordance-based Robot Manipulation with Flow Matching [6.863932324631107]
Our framework unifies affordance model learning and trajectory generation with flow matching for robot manipulation.
Our evaluation highlights that the proposed prompt tuning method for learning manipulation affordance with language prompter achieves competitive performance.
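Flow matching itself has a standard training objective: regress a velocity field that transports noise samples to data (here, trajectories) along straight-line interpolation paths. The PyTorch sketch below shows that generic objective; it is not the paper's architecture, and the affordance and prompt conditioning are reduced to a plain feature vector.

```python
# Generic (conditional) flow-matching objective: learn v_theta(x_t, t, c) that
# matches the straight-line velocity x1 - x0 between noise x0 and data x1.
# This is the standard objective, not the paper's specific model.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, traj_dim: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(traj_dim + cond_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, traj_dim),
        )

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, t, cond], dim=-1))

def flow_matching_loss(model, x1, cond):
    """x1: flattened demonstration trajectories, cond: affordance/visual features."""
    x0 = torch.randn_like(x1)                      # noise sample
    t = torch.rand(x1.shape[0], 1)                 # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1                    # linear interpolation path
    target_velocity = x1 - x0                      # constant velocity of that path
    return nn.functional.mse_loss(model(x_t, t, cond), target_velocity)

# Usage sketch: one gradient step on a batch of (trajectory, condition) pairs.
model = VelocityNet(traj_dim=32, cond_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = flow_matching_loss(model, torch.randn(8, 32), torch.randn(8, 16))
opt.zero_grad(); loss.backward(); opt.step()
```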
arXiv Detail & Related papers (2024-09-02T09:11:28Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts tracks of how points in an image should move in future time-steps based on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
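One standard way to consume predicted point tracks, consistent with this summary, is to fit a rigid transform between the points' current and predicted future positions; the learned residual policy mentioned above would then correct whatever that open-loop estimate misses and is not shown here. The Kabsch-style fit below is an illustrative assumption about how tracks could be turned into motion, not the paper's pipeline.

```python
# Fit a rigid transform (rotation R, translation t) that maps current 3D point
# positions to their predicted future positions (Kabsch algorithm). A learned
# residual policy would then correct execution errors (not shown).
import numpy as np

def rigid_transform_from_tracks(p_now: np.ndarray, p_future: np.ndarray):
    """p_now, p_future: (N, 3) corresponding 3D points; returns R (3x3), t (3,)."""
    c_now, c_fut = p_now.mean(axis=0), p_future.mean(axis=0)
    H = (p_now - c_now).T @ (p_future - c_fut)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_fut - R @ c_now
    return R, t

# Usage sketch with synthetic tracks: recover a known rotation + translation.
rng = np.random.default_rng(0)
p = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
q = p @ R_true.T + np.array([0.1, -0.2, 0.05])
R_est, t_est = rigid_transform_from_tracks(p, q)
assert np.allclose(R_est, R_true, atol=1e-6)
```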
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- DITTO: Demonstration Imitation by Trajectory Transformation [31.930923345163087]
In this work, we address the problem of one-shot imitation from a single human demonstration, given by an RGB-D video recording.
We propose a two-stage process. In the first stage, we extract the demonstration trajectory offline; this entails segmenting the manipulated objects and determining their motion relative to secondary objects such as containers.
In the online trajectory generation stage, we first re-detect all objects, then warp the demonstration trajectory to the current scene and execute it on the robot.
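The warping step can be expressed with homogeneous transforms: each demonstrated gripper pose is re-anchored to the object's re-detected pose in the current scene. The snippet below is a minimal sketch of that standard relative-pose warp, not the authors' implementation.

```python
# Warp a demonstrated end-effector trajectory into the current scene by
# re-anchoring it to the re-detected object pose. Poses are 4x4 homogeneous
# transforms in a common world frame; a standard relative-pose warp shown
# as an illustration of the idea rather than the paper's code.
import numpy as np

def warp_trajectory(traj_demo, T_obj_demo, T_obj_now):
    """traj_demo: list of 4x4 gripper poses recorded during the demonstration.
    T_obj_demo / T_obj_now: object pose during the demo / in the current scene."""
    T_delta = T_obj_now @ np.linalg.inv(T_obj_demo)   # object motion demo -> now
    return [T_delta @ T_g for T_g in traj_demo]       # apply to every waypoint

# Usage sketch: if the object moved 10 cm along x, every waypoint moves with it.
T_obj_demo = np.eye(4)
T_obj_now = np.eye(4); T_obj_now[0, 3] = 0.10
waypoint = np.eye(4); waypoint[2, 3] = 0.30           # gripper 30 cm above origin
warped = warp_trajectory([waypoint], T_obj_demo, T_obj_now)
assert np.allclose(warped[0][:3, 3], [0.10, 0.0, 0.30])
```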
arXiv Detail & Related papers (2024-03-22T13:46:51Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
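The title suggests a filtering view of interactive perception: each tactile contact incrementally narrows a belief over candidate part alignments. A generic discrete Bayes-filter update is sketched below as an assumption of how such a belief update could look; it is not the paper's model, and the likelihoods are made-up placeholders.

```python
# Generic discrete Bayes filter over candidate part poses, updated from tactile
# observations. The likelihood values here are made-up placeholders; the paper's
# actual observation model (vision-based tactile images) is not reproduced.
import numpy as np

def bayes_update(belief: np.ndarray, likelihood: np.ndarray) -> np.ndarray:
    """belief, likelihood: arrays over the same discrete set of candidate poses."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Usage sketch: 5 candidate insertion poses, two tactile measurements.
belief = np.full(5, 0.2)                               # uniform prior
likelihood_touch1 = np.array([0.1, 0.6, 0.2, 0.05, 0.05])
likelihood_touch2 = np.array([0.2, 0.5, 0.2, 0.05, 0.05])
belief = bayes_update(belief, likelihood_touch1)
belief = bayes_update(belief, likelihood_touch2)
print(belief.argmax())   # the candidate most consistent with both contacts -> 1
```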
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Automatically Prepare Training Data for YOLO Using Robotic In-Hand Observation and Synthesis [14.034128227585143]
We propose combining robotic in-hand observation and data synthesis to enlarge the limited data set collected by the robot.
The collected and synthetic images are combined to train a deep detection neural network.
The results showed that combined observation and synthetic images led to comparable performance to manual data preparation.
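The cut-and-paste style of synthesis described here pairs naturally with YOLO's label format (class id plus normalized box center and size). The snippet below sketches pasting an in-hand object crop onto a background and producing the corresponding label line; the paste location and helper names are assumptions for illustration, not the paper's tooling.

```python
# Paste an in-hand object crop onto a background image and emit the matching
# YOLO-format label (class_id x_center y_center width height, all normalized).
# The paste location and helper names are illustrative assumptions.
import numpy as np

def paste_and_label(background, crop, mask, top_left, class_id):
    y0, x0 = top_left
    h, w = crop.shape[:2]
    H, W = background.shape[:2]
    out = background.copy()
    region = out[y0:y0 + h, x0:x0 + w]
    region[mask > 0] = crop[mask > 0]              # composite the object pixels
    # YOLO labels are normalized to the full image size.
    label = (class_id,
             (x0 + w / 2) / W, (y0 + h / 2) / H,   # box center
             w / W, h / H)                         # box size
    return out, label

# Usage sketch: a 50x40 crop pasted into a 480x640 background.
bg = np.zeros((480, 640, 3), dtype=np.uint8)
crop = np.full((50, 40, 3), 200, dtype=np.uint8)
mask = np.ones((50, 40), dtype=np.uint8)
image, label = paste_and_label(bg, crop, mask, top_left=(100, 300), class_id=0)
print("%d %.6f %.6f %.6f %.6f" % label)            # one line of a YOLO .txt file
```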
arXiv Detail & Related papers (2023-01-04T04:20:08Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
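The two ingredients named here, a time-contrastive embedding and a reward defined as negative distance to a goal in that embedding, can be sketched directly. The PyTorch snippet below uses a generic triplet formulation; the encoder and sampling scheme are simplified placeholders, not the paper's architecture.

```python
# Sketch of a time-contrastive embedding and a distance-to-goal reward.
# Frames close in time are pulled together, frames far apart are pushed away;
# the reward is the negative embedding distance to a goal frame. The encoder
# and sampling scheme are simplified placeholders, not the paper's model.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256),
                        nn.ReLU(), nn.Linear(256, 32))
triplet = nn.TripletMarginLoss(margin=1.0)

def time_contrastive_loss(video: torch.Tensor) -> torch.Tensor:
    """video: (T, 3, 64, 64) frames from one human demonstration clip."""
    T = video.shape[0]
    anchor_idx = torch.randint(0, T - 5, (1,)).item()
    anchor = encoder(video[anchor_idx:anchor_idx + 1])
    positive = encoder(video[anchor_idx + 1:anchor_idx + 2])     # nearby frame
    negative = encoder(video[anchor_idx + 5:anchor_idx + 6])     # distant frame
    return triplet(anchor, positive, negative)

def reward(observation: torch.Tensor, goal: torch.Tensor) -> torch.Tensor:
    """Reward for a policy: negative distance to the goal in embedding space."""
    with torch.no_grad():
        return -torch.norm(encoder(observation) - encoder(goal), dim=-1)

# Usage sketch on random data.
clip = torch.randn(20, 3, 64, 64)
loss = time_contrastive_loss(clip)
r = reward(clip[:1], clip[-1:])
```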
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
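A bottom-up way to propose skill boundaries in unsegmented demonstrations, shown purely as an illustration (the paper's learned segmentation model is not reproduced here), is to split a trajectory wherever the end-effector nearly stops and then cluster the resulting segments into a small library.

```python
# Illustrative bottom-up segmentation: split a demonstration where the
# end-effector speed drops below a threshold (a common heuristic for
# sub-task boundaries); the resulting segments could then be clustered
# into skills. This heuristic stands in for the paper's learned model.
import numpy as np

def propose_segments(ee_positions: np.ndarray, speed_eps: float = 1e-3):
    """ee_positions: (T, 3) end-effector positions; returns list of (start, end)."""
    speeds = np.linalg.norm(np.diff(ee_positions, axis=0), axis=1)
    pauses = np.where(speeds < speed_eps)[0]
    boundaries = [0] + [int(i) + 1 for i in pauses] + [len(ee_positions)]
    boundaries = sorted(set(boundaries))
    return [(a, b) for a, b in zip(boundaries[:-1], boundaries[1:]) if b - a > 1]

# Usage sketch: move, pause, move again -> two proposed segments.
traj = np.concatenate([np.linspace([0, 0, 0], [0.2, 0, 0], 20),
                       np.repeat([[0.2, 0, 0]], 5, axis=0),
                       np.linspace([0.2, 0, 0], [0.2, 0.2, 0], 20)])
print(propose_segments(traj))
```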
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Learning to Shift Attention for Motion Generation [55.61994201686024]
One challenge of motion generation using robot learning-from-demonstration techniques is that human demonstrations follow a multimodal distribution for a single task query.
Previous approaches fail to capture all modes or tend to average modes of the demonstrations and thus generate invalid trajectories.
We propose a motion generation model with extrapolation ability to overcome this problem.
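The mode-averaging failure described above is easy to make concrete: if half of the demonstrations pass an obstacle on one side and half on the other, their mean passes straight through it. The toy example below illustrates that, and shows that sampling a single mode stays valid; it is not the paper's attention-shifting model.

```python
# Toy illustration of the mode-averaging failure: demonstrations go around an
# obstacle at y = 0 either above (+0.5) or below (-0.5). Averaging the two
# modes yields y ~ 0 at mid-path, i.e. a collision, while sampling a single
# mode stays valid. This illustrates the problem, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)

def demo(mode: int) -> np.ndarray:
    """One demonstrated path: a bump over (+1) or under (-1) the obstacle."""
    return mode * 0.5 * np.sin(np.pi * xs) + 0.02 * rng.normal(size=xs.size)

demos = [demo(+1) for _ in range(10)] + [demo(-1) for _ in range(10)]

averaged = np.mean(demos, axis=0)          # collapses the two modes
sampled = demos[rng.integers(len(demos))]  # keeps a single, valid mode

# Suppose the obstacle occupies |y| < 0.3 around x = 0.5.
print("averaged mid-path |y|:", abs(averaged[25]))   # ~0.0 -> collision
print("sampled  mid-path |y|:", abs(sampled[25]))    # ~0.5 -> clears the obstacle
```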
arXiv Detail & Related papers (2021-02-24T09:07:52Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
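The key mechanism named here, reusing the robot's own trials as demonstrations for other tasks, can be sketched as hindsight relabeling: every trajectory is filed under whichever tasks it happens to satisfy, not only the one that was commanded. The buffer below is an illustrative assumption about how that bookkeeping could look, with toy task predicates.

```python
# Sketch of hindsight task relabeling: each collected trajectory is stored as a
# demonstration for every task whose success predicate it satisfies, not just
# the task the robot was attempting. Task predicates here are toy placeholders.
from collections import defaultdict
from typing import Callable, Dict, List

Trajectory = List[dict]   # e.g. a list of {"obs": ..., "action": ...} steps

def relabel(trajectory: Trajectory,
            final_state: dict,
            task_predicates: Dict[str, Callable[[dict], bool]],
            buffers: Dict[str, List[Trajectory]]) -> None:
    """File the trajectory under every task it (perhaps accidentally) completed."""
    for task, achieved in task_predicates.items():
        if achieved(final_state):
            buffers[task].append(trajectory)

# Usage sketch: the robot attempted "stack", but the outcome also counts as
# a demonstration for the easier "touch" task.
buffers: Dict[str, List[Trajectory]] = defaultdict(list)
predicates = {
    "touch": lambda s: s["contact"],
    "stack": lambda s: s["contact"] and s["block_on_block"],
}
trial = [{"obs": "o0", "action": "a0"}, {"obs": "o1", "action": "a1"}]
relabel(trial, {"contact": True, "block_on_block": False}, predicates, buffers)
print({task: len(demos) for task, demos in buffers.items()})   # {'touch': 1}
```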
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.