Rearrange Indoor Scenes for Human-Robot Co-Activity
- URL: http://arxiv.org/abs/2303.05676v1
- Date: Fri, 10 Mar 2023 03:03:32 GMT
- Title: Rearrange Indoor Scenes for Human-Robot Co-Activity
- Authors: Weiqi Wang, Zihang Zhao, Ziyuan Jiao, Yixin Zhu, Song-Chun Zhu,
Hangxin Liu
- Abstract summary: We present an optimization-based framework for rearranging indoor furniture to better accommodate human-robot co-activities.
Our algorithm preserves the functional relations among furniture by integrating spatial and semantic co-occurrence extracted from SUNCG and ConceptNet.
Our experiments show that rearranged scenes provide an average of 14% more accessible space and 30% more objects to interact with.
- Score: 82.22847163761969
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an optimization-based framework for rearranging indoor furniture
to accommodate human-robot co-activities better. The rearrangement aims to
afford sufficient accessible space for robot activities without compromising
everyday human activities. To retain human activities, our algorithm preserves
the functional relations among furniture by integrating spatial and semantic
co-occurrence extracted from SUNCG and ConceptNet, respectively. By defining
the robot's accessible space by the amount of open space it can traverse and
the number of objects it can reach, we formulate the rearrangement for
human-robot co-activity as an optimization problem, solved by adaptive
simulated annealing (ASA) and covariance matrix adaptation evolution strategy
(CMA-ES). Our experiments on the SUNCG dataset quantitatively show that
rearranged scenes provide an average of 14% more accessible space and 30% more
objects to interact with. The quality of the rearranged scenes is qualitatively
validated by a human study, indicating the efficacy of the proposed strategy.
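To make the optimization concrete, below is a minimal, self-contained sketch of a simulated-annealing loop over furniture positions with a toy cost that trades a functional-relation term against an accessible-space proxy. The scene, the cost weights, and the linear cooling schedule are illustrative assumptions only; the paper's actual system extracts co-occurrence from SUNCG and ConceptNet and optimizes with ASA and CMA-ES.

```python
import math
import random

# Toy scene: each piece of furniture is an (x, y) position on a 10x10 m
# floor plan. All names and weights below are hypothetical illustrations,
# not the paper's actual formulation.
furniture = {"sofa": (2.0, 2.0), "tv": (8.0, 2.0), "table": (5.0, 5.0)}

# Pairs whose spatial/semantic co-occurrence (SUNCG/ConceptNet in the
# paper) says they should stay related, with a preferred distance (m).
functional_pairs = [("sofa", "tv", 4.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def cost(layout):
    # Term 1: preserve functional relations (deviation from preferred distance).
    relation = sum(abs(dist(layout[u], layout[v]) - d0)
                   for u, v, d0 in functional_pairs)
    # Term 2: reward accessible space, crudely proxied here by the minimum
    # pairwise clearance between furniture pieces.
    names = list(layout)
    clearance = min(dist(layout[a], layout[b])
                    for i, a in enumerate(names) for b in names[i + 1:])
    return relation - 0.5 * clearance  # lower is better

def anneal(layout, steps=5000, t0=1.0):
    best = dict(layout)
    cur, cur_cost = dict(layout), cost(layout)
    for k in range(steps):
        t = t0 * (1 - k / steps)  # simple linear cooling, not ASA's schedule
        cand = dict(cur)
        name = random.choice(list(cand))
        x, y = cand[name]
        # Perturb one piece, clamped to the floor plan.
        cand[name] = (min(10, max(0, x + random.gauss(0, 0.5))),
                      min(10, max(0, y + random.gauss(0, 0.5))))
        c = cost(cand)
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / max(t, 1e-9)):
            cur, cur_cost = cand, c
            if c < cost(best):
                best = dict(cand)
    return best

print(anneal(furniture))
```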
Related papers
- Visual-Geometric Collaborative Guidance for Affordance Learning [63.038406948791454]
We propose a visual-geometric collaborative guided affordance learning network that incorporates visual and geometric cues.
Our method outperforms representative models in both objective metrics and visual quality.
arXiv Detail & Related papers (2024-10-15T07:35:51Z)
- Large Language Model-based Human-Agent Collaboration for Complex Task Solving [94.3914058341565]
We introduce the problem of Large Language Models (LLMs)-based human-agent collaboration for complex task-solving.
We propose a Reinforcement Learning-based Human-Agent Collaboration method, ReHAC.
This approach includes a policy model designed to determine the most opportune stages for human intervention within the task-solving process.
arXiv Detail & Related papers (2024-02-20T11:03:36Z)
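The idea of a policy that picks opportune moments for human intervention can be illustrated with a trivially small gating rule: hand control to the human when the estimated gain exceeds a fixed intervention cost. This is a hypothetical stand-in; ReHAC learns such a policy with reinforcement learning.

```python
# Hypothetical sketch of ReHAC's core decision: at each task-solving step,
# decide whether to ask the human to intervene. The threshold rule and the
# confidence values are illustrative; the paper learns a policy model.
HUMAN_COST = 0.3  # assumed fixed cost of interrupting the human

def should_intervene(agent_confidence: float) -> bool:
    expected_gain = 1.0 - agent_confidence  # room for a human to improve things
    return expected_gain > HUMAN_COST

for step, conf in enumerate([0.9, 0.8, 0.4, 0.95]):
    actor = "human" if should_intervene(conf) else "agent"
    print(f"step {step}: confidence={conf:.2f} -> {actor}")
```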
- A Framework for Realistic Simulation of Daily Human Activity [1.8877825068318652]
This paper presents a framework for simulating daily human activity patterns in home environments at scale.
We introduce a method for specifying day-to-day variation in schedules and present a bidirectional constraint propagation algorithm for generating schedules from templates.
arXiv Detail & Related papers (2023-11-26T19:50:23Z)
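Bidirectional constraint propagation over a schedule template can be sketched as alternating forward and backward passes that tighten each activity's feasible start window. The activities, windows, and ordering constraints below are made-up examples, not the paper's templates.

```python
# Each activity: (duration_min, earliest_start, latest_start), in minutes
# after midnight. Ordering constraints say one activity finishes before the
# next begins. All numbers are illustrative.
activities = {
    "wake_up":   (15, 400, 480),
    "breakfast": (30, 390, 540),
    "leave":     (5,  450, 500),
}
order = [("wake_up", "breakfast"), ("breakfast", "leave")]

def propagate(acts, order):
    acts = {k: list(v) for k, v in acts.items()}
    changed = True
    while changed:
        changed = False
        for a, b in order:  # forward pass: b cannot start before a finishes
            lo = acts[a][1] + acts[a][0]
            if lo > acts[b][1]:
                acts[b][1] = lo; changed = True
        for a, b in order:  # backward pass: a must start early enough for b
            hi = acts[b][2] - acts[a][0]
            if hi < acts[a][2]:
                acts[a][2] = hi; changed = True
    return acts

for name, (dur, lo, hi) in propagate(activities, order).items():
    print(f"{name}: start in [{lo}, {hi}] min after midnight, {dur} min long")
```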
- Regularized Deep Signed Distance Fields for Reactive Motion Generation [30.792481441975585]
Distance-based constraints are fundamental for enabling robots to plan their actions and act safely.
We propose Regularized Deep Signed Distance Fields (ReDSDF), a single neural implicit function that can compute smooth distance fields at any scale.
We demonstrate the effectiveness of our approach in representative simulated tasks for whole-body control (WBC) and safe Human-Robot Interaction (HRI) in shared workspaces.
arXiv Detail & Related papers (2022-03-09T14:21:32Z)
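ReDSDF itself is a learned neural implicit function; as a stand-in, the sketch below uses an analytic signed distance field for a disk obstacle to show the typical downstream use in reactive control: scaling commanded velocity smoothly to zero at an obstacle's surface. All constants are illustrative.

```python
import math

# Analytic stand-in for a learned distance field: a disk obstacle.
OBSTACLE = (3.0, 3.0)  # center (m)
RADIUS = 1.0

def sdf(x, y):
    """Signed distance: negative inside the obstacle, positive outside."""
    return math.hypot(x - OBSTACLE[0], y - OBSTACLE[1]) - RADIUS

def safe_velocity(x, y, vx, vy, margin=0.5):
    """Scale velocity down smoothly as the robot approaches the surface."""
    d = sdf(x, y)
    scale = max(0.0, min(1.0, d / margin))  # 0 at the surface, 1 beyond margin
    return vx * scale, vy * scale

for pos in [(0.0, 0.0), (1.6, 3.0), (2.2, 3.0)]:
    print(pos, "->", safe_velocity(*pos, vx=1.0, vy=0.0))
```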
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
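The closed-loop flavor of sampling-based predictive pushing can be conveyed with a toy controller: at each step, imagine a set of candidate pushes under a crude forward model, score them against the goal, and execute the best. The point-mass model and parameters are placeholders, not the paper's Real-to-Sim analysis.

```python
import math

GOAL = (5.0, 5.0)
STEP = 0.3  # how far one push moves the object in the toy model

def imagine(obj, angle):
    """Crude forward model: the object slides STEP along the push direction."""
    return (obj[0] + STEP * math.cos(angle), obj[1] + STEP * math.sin(angle))

def best_push(obj, n_candidates=16):
    angles = [2 * math.pi * i / n_candidates for i in range(n_candidates)]
    return min(angles, key=lambda a: math.dist(imagine(obj, a), GOAL))

obj = (0.0, 0.0)
for _ in range(25):                     # closed loop: re-plan after every push
    obj = imagine(obj, best_push(obj))  # executing == the model, in this toy
print("final object position:", obj)
```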
- HARPS: An Online POMDP Framework for Human-Assisted Robotic Planning and Sensing [1.3678064890824186]
The Human Assisted Robotic Planning and Sensing (HARPS) framework is presented for active semantic sensing and planning in human-robot teams.
This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments.
Simulations of a UAV-enabled target search application in a large-scale partially structured environment show significant improvements in time and belief state estimates.
arXiv Detail & Related papers (2021-10-20T00:41:57Z)
- Ergonomically Intelligent Physical Human-Robot Interaction: Postural Estimation, Assessment, and Optimization [3.681892767755111]
We show that we can estimate human posture solely from the trajectory of the interacting robot.
We propose DULA, a differentiable ergonomics model, and use it in gradient-free postural optimization for physical human-robot interaction tasks.
arXiv Detail & Related papers (2021-08-12T21:13:06Z)
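Gradient-free postural optimization can be sketched as random search over a planar two-joint arm's posture, minimizing a toy cost that combines reach error with a discomfort penalty. Both the arm model and the cost are illustrative stand-ins for the paper's learned differentiable ergonomics model.

```python
import math
import random

L1, L2 = 0.3, 0.3        # link lengths (m), hypothetical
TARGET = (0.45, 0.15)    # point the hand must reach
NEUTRAL = (0.3, 0.6)     # comfortable joint angles (rad), hypothetical

def hand(q1, q2):
    """Forward kinematics of a planar 2-link arm."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def cost(q1, q2):
    reach_err = math.dist(hand(q1, q2), TARGET)
    discomfort = abs(q1 - NEUTRAL[0]) + abs(q2 - NEUTRAL[1])
    return 10.0 * reach_err + discomfort  # reach accuracy dominates

# Gradient-free search: sample postures, keep the best.
best = min(((random.uniform(-math.pi, math.pi),
             random.uniform(0, math.pi)) for _ in range(20000)),
           key=lambda q: cost(*q))
print("posture (rad):", best, "cost:", round(cost(*best), 3))
```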
- RobustFusion: Robust Volumetric Performance Reconstruction under Human-object Interactions from Monocular RGBD Stream [27.600873320989276]
High-quality 4D reconstruction of human performance with complex interactions with various objects is essential in real-world scenarios.
Recent advances still fail to provide reliable performance reconstruction.
We propose RobustFusion, a robust volumetric performance reconstruction system for human-object interaction scenarios.
arXiv Detail & Related papers (2021-04-30T08:41:45Z)
- Show Me What You Can Do: Capability Calibration on Reachable Workspace for Human-Robot Collaboration [83.4081612443128]
We show that a short calibration using REMP can effectively bridge the gap between what a non-expert user thinks a robot can reach and its ground-truth reachable workspace.
We show that this calibration procedure not only results in better user perception, but also promotes more efficient human-robot collaborations.
arXiv Detail & Related papers (2021-03-06T09:14:30Z)
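A generic way to approximate a robot's ground-truth reachable workspace, the quantity REMP calibrates users against, is to sample joint configurations through forward kinematics. The planar two-link arm, link lengths, and joint limits below are assumptions for illustration, not REMP's actual procedure.

```python
import math
import random

L1, L2 = 0.4, 0.3                      # link lengths (m), hypothetical
LIMITS = [(-2.0, 2.0), (-2.5, 2.5)]    # joint limits (rad), hypothetical

def fk(q1, q2):
    """Forward kinematics of a planar 2-link arm."""
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

# Monte Carlo approximation of the reachable workspace.
points = [fk(random.uniform(*LIMITS[0]), random.uniform(*LIMITS[1]))
          for _ in range(10000)]

def reachable(target, tol=0.02):
    return any(math.dist(p, target) < tol for p in points)

print("can reach (0.5, 0.3)?", reachable((0.5, 0.3)))
print("can reach (0.9, 0.0)?", reachable((0.9, 0.0)))  # beyond max reach 0.7
```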
- Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles [73.15950858151594]
This paper presents Latent Optimistic Value Exploration (LOVE), a strategy that enables deep exploration through optimism in the face of uncertain long-term rewards.
We combine latent world models with value function estimation to predict infinite-horizon returns and recover associated uncertainty via ensembling.
We apply LOVE to visual robot control tasks in continuous action spaces and demonstrate on average more than 20% improved sample efficiency in comparison to state-of-the-art and other exploration objectives.
arXiv Detail & Related papers (2020-10-27T22:06:57Z)
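LOVE's optimism-under-uncertainty principle can be shown in a few lines: score each action by its mean ensemble value plus a bonus proportional to ensemble disagreement, then explore the highest-scoring action. The hard-coded ensemble values are stand-ins for what latent world models with value heads would predict.

```python
import statistics

BETA = 1.0  # exploration weight, hypothetical

ensemble_values = {            # action -> value predicted by each ensemble member
    "push_left":  [1.0, 1.1, 0.9, 1.0],   # well understood, mediocre return
    "push_right": [0.5, 2.0, 0.2, 1.6],   # uncertain: members disagree
}

def optimistic_score(values):
    # Optimism in the face of uncertainty: mean + disagreement bonus.
    return statistics.mean(values) + BETA * statistics.stdev(values)

action = max(ensemble_values, key=lambda a: optimistic_score(ensemble_values[a]))
for a, v in ensemble_values.items():
    print(f"{a}: mean={statistics.mean(v):.2f} "
          f"optimistic={optimistic_score(v):.2f}")
print("explore with:", action)
```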
This list is automatically generated from the titles and abstracts of the papers on this site.