Value of Assistance for Grasping
- URL: http://arxiv.org/abs/2310.14402v2
- Date: Mon, 18 Mar 2024 03:35:28 GMT
- Title: Value of Assistance for Grasping
- Authors: Mohammad Masarwy, Yuval Goshen, David Dovrat, Sarah Keren
- Abstract summary: We provide a measure for assessing the expected effect a specific observation will have on the robot's ability to complete its task.
We evaluate our suggested measure in simulated and real-world collaborative grasping settings.
- Score: 6.452975320319021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In multiple realistic settings, a robot is tasked with grasping an object without knowing its exact pose and relies on a probabilistic estimation of the pose to decide how to attempt the grasp. We support settings in which it is possible to provide the robot with an observation of the object before a grasp is attempted, but this possibility is limited, and there is a need to decide which sensing action would be most beneficial. We support this decision by offering a novel Value of Assistance (VOA) measure for assessing the expected effect a specific observation will have on the robot's ability to complete its task. We evaluate our suggested measure in simulated and real-world collaborative grasping settings.
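The abstract does not spell out how VOA is computed, but the underlying idea can be illustrated: given a discrete belief over candidate object poses, a cost model for grasps planned under each pose hypothesis, and an observation model for a candidate sensing action, the value of that sensing action is the expected reduction in grasp cost after a Bayesian belief update. The sketch below is a minimal illustration under these assumptions; the belief, cost matrix, and observation model are made-up placeholders, not the paper's formulation or data.

```python
import numpy as np


def expected_grasp_cost(belief, cost):
    """Expected cost of the best single grasp choice under a pose belief.

    belief: (n,) probabilities over n candidate object poses.
    cost:   (n, n) matrix where cost[i, j] is the cost of executing the
            grasp planned for pose i when the true pose is j (illustrative).
    """
    # Average each candidate grasp's cost over the belief, then act
    # greedily by picking the grasp with the lowest expected cost.
    return float(np.min(cost @ belief))


def voa(belief, cost, obs_likelihood):
    """Value of Assistance of one sensing action (illustrative sketch).

    obs_likelihood: (m, n) matrix with P(observation o | true pose j).
    Returns the expected reduction in grasp cost from sensing first.
    """
    cost_now = expected_grasp_cost(belief, cost)
    p_obs = obs_likelihood @ belief  # marginal probability of each observation
    cost_after = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o < 1e-12:
            continue
        posterior = obs_likelihood[o] * belief / p_o  # Bayesian belief update
        cost_after += p_o * expected_grasp_cost(posterior, cost)
    return cost_now - cost_after


# Toy example: three pose hypotheses and a noisy sensor with three readings.
belief = np.array([0.5, 0.3, 0.2])
cost = np.array([[0.0, 1.0, 1.0],
                 [1.0, 0.0, 1.0],
                 [1.0, 1.0, 0.0]])  # cost 1 whenever the pose guess is wrong
obs_likelihood = np.array([[0.8, 0.1, 0.1],
                           [0.1, 0.8, 0.1],
                           [0.1, 0.1, 0.8]])
print(round(voa(belief, cost, obs_likelihood), 3))  # prints 0.3
```

In this toy example the sensing action is worth 0.3 units of expected cost: since a cost of 1 denotes a failed grasp, observing first drops the expected failure probability from 0.5 to 0.2.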
Related papers
- Planning Robot Placement for Object Grasping [5.327052729563043]
When performing manipulation-based activities such as picking objects, a mobile robot needs to position its base at a location that supports successful execution.
To address this problem, prominent approaches typically rely on costly grasp planners to provide grasp poses for a target object.
We propose instead to first find robot placements that would not result in collision with the environment, then evaluate them to find the best placement candidate.
arXiv Detail & Related papers (2024-05-26T20:57:32Z)
- 3D Pose Nowcasting: Forecast the Future to Improve the Present [65.65178700528747]
We propose a novel vision-based system leveraging depth data to accurately establish the 3D locations of skeleton joints.
We introduce the concept of Pose Nowcasting, denoting the capability of the proposed system to enhance its current pose estimation accuracy.
The experimental evaluation is conducted on two different datasets, providing accurate and real-time performance.
arXiv Detail & Related papers (2023-08-24T16:40:47Z)
- Value of Assistance for Mobile Agents [7.922832585855347]
Mobile robotic agents often suffer from localization uncertainty which grows with time and with the agents' movement.
In some settings, it may be possible to perform assistive actions that reduce uncertainty about a robot's location.
We propose Value of Assistance (VOA) to represent the expected cost reduction that assistance will yield at a given point of execution.
arXiv Detail & Related papers (2023-08-23T07:02:57Z)
- H-SAUR: Hypothesize, Simulate, Act, Update, and Repeat for Understanding Object Articulations from Interactions [62.510951695174604]
"Hypothesize, Simulate, Act, Update, and Repeat" (H-SAUR) is a probabilistic generative framework that generates hypotheses about how objects articulate given input observations.
We show that the proposed model significantly outperforms the current state-of-the-art articulated object manipulation framework.
We further improve the test-time efficiency of H-SAUR by integrating a learned prior from learning-based vision models.
arXiv Detail & Related papers (2022-10-22T18:39:33Z)
- Intention estimation from gaze and motion features for human-robot shared-control object manipulation [1.128708201885454]
Shared control can help in teleoperated object manipulation by assisting with the execution of the user's intention.
An intention estimation framework is presented, which uses natural gaze and motion features to predict the current action and the target object.
arXiv Detail & Related papers (2022-08-18T07:53:19Z)
- Can Foundation Models Perform Zero-Shot Task Specification For Robot Manipulation? [54.442692221567796]
Task specification is critical for engagement of non-expert end-users and adoption of personalized robots.
A widely studied approach to task specification is through goals, using either compact state vectors or goal images from the same robot scene.
In this work, we explore alternate and more general forms of goal specification that are expected to be easier for humans to specify and use.
arXiv Detail & Related papers (2022-04-23T19:39:49Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
We propose a probabilistic model for human motion prediction in this paper.
Our model can generate several future motions when given an observed motion sequence.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Object Detection and Pose Estimation from RGB and Depth Data for Real-time, Adaptive Robotic Grasping [0.0]
We propose a system that performs real-time object detection and pose estimation, for the purpose of dynamic robot grasping.
The proposed approach allows the robot to detect the object's identity and its actual pose, and then adapt a canonical grasp so it can be used with the new pose (a generic transform sketch of this adaptation appears after this list).
For training, the system defines a canonical grasp by capturing the relative pose of an object with respect to the gripper attached to the robot's wrist.
During testing, once a new pose is detected, a canonical grasp for the object is identified and then dynamically adapted by adjusting the robot arm's joint angles.
arXiv Detail & Related papers (2021-01-18T22:22:47Z)
- Safe and Effective Picking Paths in Clutter given Discrete Distributions of Object Poses [16.001980921287704]
One approach is to perform object pose estimation and use the most likely candidate pose per object to pick the target without collisions.
This work proposes first a perception process for 6D pose estimation, which returns a discrete distribution of object poses in a scene.
Then, an open-loop planning pipeline is proposed to return safe and effective solutions for moving a robotic arm to pick.
arXiv Detail & Related papers (2020-08-11T00:52:03Z)
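The real-time adaptive grasping entry above adapts a stored canonical grasp to a newly detected object pose. A common way to express this adaptation is with homogeneous transforms: the gripper pose recorded relative to the object is composed with the newly estimated object pose to obtain the gripper target in the robot's frame, which is then handed to inverse kinematics. The sketch below shows only that composition under these assumptions; the transforms are hard-coded placeholders and the joint-angle (IK) step is omitted, so this is not the cited paper's implementation.

```python
import numpy as np


def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Canonical grasp recorded once: the gripper pose expressed in the object's
# frame (T_object_gripper). The numbers here are made-up placeholders.
T_object_gripper = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.10]))

# Newly detected object pose in the robot's frame (T_robot_object). In the
# cited paper this comes from RGB-D detection and pose estimation; here it
# is a hard-coded stand-in (object rotated 30 degrees about the z-axis).
yaw = np.deg2rad(30.0)
R_new = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])
T_robot_object = pose_to_matrix(R_new, np.array([0.45, -0.10, 0.02]))

# Adapted grasp: compose the detected object pose with the stored
# object-to-gripper offset to get the gripper target in the robot's frame.
# Realizing this target by adjusting the arm's joint angles (inverse
# kinematics) is the subsequent step and is omitted here.
T_robot_gripper = T_robot_object @ T_object_gripper
print(np.round(T_robot_gripper, 3))
```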
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.