Proactive Robot Assistance via Spatio-Temporal Object Modeling
- URL: http://arxiv.org/abs/2211.15501v1
- Date: Mon, 28 Nov 2022 16:20:50 GMT
- Title: Proactive Robot Assistance via Spatio-Temporal Object Modeling
- Authors: Maithili Patel, Sonia Chernova
- Abstract summary: Proactive robot assistance enables a robot to anticipate and provide for a user's needs without being explicitly asked.
We introduce a generative graph neural network to learn object dynamics from temporal sequences of object arrangements.
Our model outperforms the leading baseline in predicting object movement, correctly predicting locations for 11.1% more objects and wrongly predicting locations for 11.5% fewer of the objects used by the user.
- Score: 15.785125079811902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Proactive robot assistance enables a robot to anticipate and provide for a
user's needs without being explicitly asked. We formulate proactive assistance
as the problem of the robot anticipating temporal patterns of object movements
associated with everyday user routines, and proactively assisting the user by
placing objects to adapt the environment to their needs. We introduce a
generative graph neural network to learn a unified spatio-temporal predictive
model of object dynamics from temporal sequences of object arrangements. We
additionally contribute the Household Object Movements from Everyday Routines
(HOMER) dataset, which tracks household objects associated with human
activities of daily living across 50+ days for five simulated households. Our
model outperforms the leading baseline in predicting object movement, correctly
predicting locations for 11.1% more objects and wrongly predicting locations
for 11.5% fewer of the objects used by the user.
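For intuition only, the sketch below shows one way the abstract's idea could look in code: a small PyTorch graph network over the current object arrangement, conditioned on time of day, that scores candidate next locations for each object. Every name, dimension, and the message-passing scheme here is an illustrative assumption; the paper's actual model is a generative graph neural network and differs in its details.

```python
# Illustrative sketch only: a tiny graph network that, given the current
# arrangement of objects and the time of day, scores candidate next locations
# for every object. Hypothetical code; not the authors' architecture.
import math
import torch
import torch.nn as nn

class ObjectDynamicsSketch(nn.Module):
    def __init__(self, n_objects: int, n_locations: int, d: int = 64):
        super().__init__()
        self.obj_emb = nn.Embedding(n_objects, d)    # one node per tracked object
        self.loc_emb = nn.Embedding(n_locations, d)  # each object's current location
        self.time_mlp = nn.Linear(2, d)              # sin/cos encoding of time of day
        self.msg = nn.Linear(2 * d, d)               # pairwise message between nodes
        self.upd = nn.GRUCell(d, d)                  # node update from aggregated messages
        self.head = nn.Linear(d, n_locations)        # logits over candidate locations

    def forward(self, obj_ids, loc_ids, t_frac):
        # obj_ids, loc_ids: (N,) integer tensors; t_frac: fraction of the day in [0, 1)
        t = torch.tensor([math.sin(2 * math.pi * t_frac),
                          math.cos(2 * math.pi * t_frac)])
        h = self.obj_emb(obj_ids) + self.loc_emb(loc_ids) + torch.relu(self.time_mlp(t))
        for _ in range(2):  # two message-passing rounds on a fully connected object graph
            n = h.size(0)
            pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                               h.unsqueeze(0).expand(n, n, -1)], dim=-1)
            m = self.msg(pairs).relu().mean(dim=1)  # aggregate incoming messages per node
            h = self.upd(m, h)
        return self.head(h)  # (N, n_locations): anticipated next location per object

# Usage: predict where 5 of 30 tracked objects will move at ~7 a.m. (t_frac ~ 0.3)
model = ObjectDynamicsSketch(n_objects=30, n_locations=12)
logits = model(torch.arange(5), torch.randint(12, (5,)), t_frac=0.3)
predicted_locations = logits.argmax(dim=-1)
```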
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- A Framework for Realistic Simulation of Daily Human Activity [1.8877825068318652]
This paper presents a framework for simulating daily human activity patterns in home environments at scale.
We introduce a method for specifying day-to-day variation in schedules and present a bidirectional constraint propagation algorithm for generating schedules from templates.
arXiv Detail & Related papers (2023-11-26T19:50:23Z)
- Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations [61.659439423703155]
TOHO: Task-Oriented Human-Object Interactions Generation with Implicit Neural Representations.
Our method generates continuous motions that are parameterized only by the temporal coordinate.
This work takes a step further toward general human-scene interaction simulation.
arXiv Detail & Related papers (2023-03-23T09:31:56Z)
- Learn to Predict How Humans Manipulate Large-sized Objects from Interactive Motions [82.90906153293585]
We propose a graph neural network, HO-GCN, to fuse motion data and dynamic descriptors for the prediction task.
We show that the proposed network, which consumes dynamic descriptors, achieves state-of-the-art prediction results and generalizes better to unseen objects.
arXiv Detail & Related papers (2022-06-25T09:55:39Z)
- Property-Aware Robot Object Manipulation: a Generative Approach [57.70237375696411]
In this work, we focus on how to generate robot motion adapted to the hidden properties of the manipulated objects.
We explore the possibility of leveraging Generative Adversarial Networks to synthesize new actions coherent with the properties of the object.
Our results show that Generative Adversarial Networks can be a powerful tool for generating novel and meaningful transportation actions.
arXiv Detail & Related papers (2021-06-08T14:15:36Z)
- Object Detection and Pose Estimation from RGB and Depth Data for Real-time, Adaptive Robotic Grasping [0.0]
We propose a system that performs real-time object detection and pose estimation, for the purpose of dynamic robot grasping.
The proposed approach allows the robot to detect the object's identity and actual pose, and then adapt a canonical grasp to the newly detected pose.
For training, the system defines a canonical grasp by capturing the relative pose of an object with respect to the gripper attached to the robot's wrist.
During testing, once a new pose is detected, the canonical grasp for the object is identified and then dynamically adapted by adjusting the robot arm's joint angles (see the pose-adaptation sketch after this list).
arXiv Detail & Related papers (2021-01-18T22:22:47Z)
- Reactive Human-to-Robot Handovers of Arbitrary Objects [57.845894608577495]
We present a vision-based system that enables human-to-robot handovers of unknown objects.
Our approach combines closed-loop motion planning with real-time, temporally-consistent grasp generation.
We demonstrate the generalizability, usability, and robustness of our approach on a novel benchmark set of 26 diverse household objects.
arXiv Detail & Related papers (2020-11-17T21:52:22Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
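As noted in the entry on real-time adaptive grasping above, canonical-grasp adaptation amounts to re-expressing a recorded grasp relative to the object and composing it with the newly detected object pose. The sketch below is a hypothetical illustration of that pose algebra; the names and frames are assumptions, and the cited system ultimately realizes the adapted grasp by adjusting joint angles.

```python
# Hypothetical sketch of canonical-grasp adaptation with homogeneous transforms.
# All 4x4 matrices are poses expressed in the robot's base frame; these names
# are illustrative, not the cited paper's API.
import numpy as np

def adapt_canonical_grasp(T_obj_canonical: np.ndarray,
                          T_grasp_canonical: np.ndarray,
                          T_obj_new: np.ndarray) -> np.ndarray:
    # The grasp expressed in the object's own frame is invariant to object motion.
    T_obj_to_grasp = np.linalg.inv(T_obj_canonical) @ T_grasp_canonical
    # Composing with the newly detected object pose gives the adapted gripper
    # target, which an inverse-kinematics solver would map to joint angles.
    return T_obj_new @ T_obj_to_grasp
```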
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.