EXOT: Exit-aware Object Tracker for Safe Robotic Manipulation of Moving
Object
- URL: http://arxiv.org/abs/2306.05262v1
- Date: Thu, 8 Jun 2023 15:03:47 GMT
- Title: EXOT: Exit-aware Object Tracker for Safe Robotic Manipulation of Moving
Object
- Authors: Hyunseo Kim, Hye Jung Yoon, Minji Kim, Dong-Sig Han, and Byoung-Tak
Zhang
- Abstract summary: We propose the EXit-aware Object Tracker (EXOT) on a robot hand camera that recognizes an object's absence during manipulation.
The robot decides whether to proceed by examining the tracker's bounding box output containing the target object.
Our tracker shows 38% higher exit-aware performance than a baseline method.
- Score: 18.17924341716236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current robotic hand manipulation narrowly operates with objects in
predictable positions in limited environments. Thus, when the location of the
target object deviates severely from the expected location, a robot sometimes
responds in an unexpected way, especially when it operates with a human. For
safe robot operation, we propose the EXit-aware Object Tracker (EXOT) on a
robot hand camera that recognizes an object's absence during manipulation. The
robot decides whether to proceed by examining the tracker's bounding box output
containing the target object. We adopt an out-of-distribution classifier for
more accurate object recognition since trackers can mistrack a background as a
target object. To the best of our knowledge, our method is the first approach
of applying an out-of-distribution classification technique to a tracker
output. We evaluate our method on the first-person video benchmark dataset,
TREK-150, and on the custom dataset, RMOT-223, that we collect from the UR5e
robot. Then we test our tracker on the UR5e robot in real-time with a
conveyor-belt sushi task, to examine the tracker's ability to track target
dishes and to determine the exit status. Our tracker shows 38% higher
exit-aware performance than a baseline method. The dataset and the code will be
released at https://github.com/hskAlena/EXOT.
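The abstract's core mechanism — scoring the tracker's bounding-box output with an out-of-distribution classifier and halting when the target appears absent — can be sketched as a minimal decision rule. This is an illustrative proxy only (using the maximum softmax probability as the OOD score, a common baseline technique); the threshold, function names, and score are assumptions, not the paper's actual classifier.

```python
import numpy as np

def exit_decision(softmax_probs, threshold=0.5):
    """Decide whether the tracked target has exited the view.

    Uses the maximum softmax probability over in-distribution classes
    as a simple OOD score: a low score suggests the tracker's bounding
    box contains background rather than the target, so the robot
    should stop instead of proceeding with manipulation.
    """
    score = float(np.max(softmax_probs))
    return ("exit", score) if score < threshold else ("proceed", score)

# A confident in-distribution prediction -> keep manipulating.
print(exit_decision(np.array([0.05, 0.90, 0.05])))
# A flat class distribution suggests a mistracked background -> stop.
print(exit_decision(np.array([0.34, 0.33, 0.33])))
```

In practice the score would come from a classifier run on the image crop inside each tracker bounding box, with the robot querying the decision at every control step.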
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- Planning Robot Placement for Object Grasping [5.327052729563043]
When performing manipulation-based activities such as picking objects, a mobile robot needs to position its base at a location that supports successful execution.
To address this problem, prominent approaches typically rely on costly grasp planners to provide grasp poses for a target object.
We propose instead to first find robot placements that would not result in collision with the environment, then evaluate them to find the best placement candidate.
arXiv Detail & Related papers (2024-05-26T20:57:32Z)
- Track2Act: Predicting Point Tracks from Internet Videos enables Generalizable Robot Manipulation [65.46610405509338]
We seek to learn a generalizable goal-conditioned policy that enables zero-shot robot manipulation.
Our framework, Track2Act, predicts how points in an image should move in future time steps, conditioned on a goal.
We show that this approach of combining scalably learned track prediction with a residual policy enables diverse generalizable robot manipulation.
arXiv Detail & Related papers (2024-05-02T17:56:55Z)
- Exploring 3D Human Pose Estimation and Forecasting from the Robot's Perspective: The HARPER Dataset [52.22758311559]
We introduce HARPER, a novel dataset for 3D body pose estimation and forecasting in dyadic interactions between users and Spot.
The key-novelty is the focus on the robot's perspective, i.e., on the data captured by the robot's sensors.
The scenario underlying HARPER includes 15 actions, of which 10 involve physical contact between the robot and users.
arXiv Detail & Related papers (2024-03-21T14:53:50Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective.
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Robot Person Following in Uniform Crowd Environment [13.708992331117281]
Person-tracking robots have many applications, such as in security, elderly care, and socializing robots.
In this work, we focus on improving the perceptivity of a robot for a person following task by developing a robust and real-time applicable object tracker.
We present a new robot person tracking system with a new RGB-D tracker, Deep Tracking with RGB-D (DTRD) that is resilient to tricky challenges introduced by the uniform crowd environment.
arXiv Detail & Related papers (2022-05-21T10:20:14Z)
- Object Manipulation via Visual Target Localization [64.05939029132394]
Training agents to manipulate objects poses many challenges.
We propose an approach that explores the environment in search for target objects, computes their 3D coordinates once they are located, and then continues to estimate their 3D locations even when the objects are not visible.
Our evaluations show a massive 3x improvement in success rate over a model that has access to the same sensory suite.
arXiv Detail & Related papers (2022-03-15T17:59:01Z)
- A System for Traded Control Teleoperation of Manipulation Tasks using Intent Prediction from Hand Gestures [20.120263332724438]
This paper presents a teleoperation system that includes robot perception and intent prediction from hand gestures.
The perception module identifies the objects present in the robot workspace, and the intent prediction module infers which object the user likely wants to grasp.
arXiv Detail & Related papers (2021-07-05T07:37:17Z)
- Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision [6.089651609511804]
We present an approach for safe and object-independent human-to-robot handovers using real time robotic vision and manipulation.
In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
arXiv Detail & Related papers (2020-06-02T17:29:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.