What we see and What we don't see: Imputing Occluded Crowd Structures from Robot Sensing
- URL: http://arxiv.org/abs/2109.08494v1
- Date: Fri, 17 Sep 2021 12:12:13 GMT
- Title: What we see and What we don't see: Imputing Occluded Crowd Structures from Robot Sensing
- Authors: Javad Amirian, Jean-Bernard Hayet, Julien Pettre
- Abstract summary: We address the problem of inferring the human occupancy in the space around the robot, in blind spots, beyond the range of its sensing capabilities.
This problem is largely unexplored, despite its important impact on the efficiency and safety of robot crowd navigation.
- Score: 7.6272993984699635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the navigation of mobile robots in crowded environments, for
which onboard sensing of the crowd is typically limited by occlusions. We
address the problem of inferring the human occupancy in the space around the
robot, in blind spots, beyond the range of its sensing capabilities. This
problem is largely unexplored, despite its important impact on the efficiency
and safety of robot crowd navigation, which requires estimating and predicting
the crowd state around the robot. In this work, we propose the first solution
for sampling predictions of possible human presence, based on the state of the
smaller set of people that the robot actually senses, as well as on previous
observations of the crowd activity.
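To make the inference problem concrete, the following is a minimal, hypothetical sketch of how hidden-pedestrian hypotheses could be sampled on an occupancy grid around the robot, conditioning a density prior accumulated from past observations on the people that are currently sensed. It is not the method proposed in the paper; the grid representation, the Poisson sampling, the 2 m social-proximity length scale, and all function and parameter names are assumptions made for illustration.

```python
# Toy sketch only (NOT the paper's algorithm): sample one hypothesis of
# where occluded people might be, given the people the robot can see,
# a visibility mask, and an empirical crowd-density prior.
import numpy as np

def sample_hidden_people(visible_positions, visibility_mask, density_prior,
                         cell_size=0.5, rng=None):
    """Sample one hypothesis of pedestrian positions in occluded cells.

    visible_positions : (N, 2) array of sensed people in grid (x, y) coords.
    visibility_mask   : (H, W) bool array, True where the robot can see.
    density_prior     : (H, W) array, expected people per cell from history.
    Returns an (M, 2) array of hypothesised people in unobserved cells.
    """
    rng = np.random.default_rng() if rng is None else rng
    hidden = ~visibility_mask

    # Expected occupancy in occluded cells: the historical density, lightly
    # boosted near currently sensed people (crude social-grouping assumption).
    rate = np.asarray(density_prior, dtype=float) * hidden
    for px, py in visible_positions:
        yy, xx = np.indices(rate.shape)
        dist = np.hypot(xx - px, yy - py) * cell_size
        rate += hidden * 0.2 * np.exp(-dist / 2.0)   # assumed 2 m length scale

    # Draw a Poisson count per occluded cell, then jitter inside each cell.
    counts = rng.poisson(rate)
    people = []
    for iy, ix in np.argwhere(counts > 0):
        for _ in range(counts[iy, ix]):
            people.append([ix + rng.uniform(), iy + rng.uniform()])
    return np.array(people).reshape(-1, 2)

if __name__ == "__main__":
    # Toy example: 20x20 grid, weak uniform prior, right half occluded,
    # one person sensed at the edge of the visible region.
    mask = np.ones((20, 20), dtype=bool)
    mask[:, 10:] = False
    prior = np.full((20, 20), 0.02)
    seen = np.array([[9.0, 10.0]])
    print(sample_hidden_people(seen, mask, prior))
```

Drawing several such samples yields an ensemble of plausible crowd configurations behind occlusions, which a navigation planner could evaluate against, in the spirit of the sampling formulation described above.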
Related papers
- CoNav: A Benchmark for Human-Centered Collaborative Navigation [66.6268966718022]
We propose a collaborative navigation (CoNav) benchmark.
Our CoNav tackles the critical challenge of constructing a 3D navigation environment with realistic and diverse human activities.
We propose an intention-aware agent for reasoning both long-term and short-term human intention.
arXiv Detail & Related papers (2024-06-04T15:44:25Z)
- Robots That Can See: Leveraging Human Pose for Trajectory Prediction [30.919756497223343]
We present a Transformer based architecture to predict human future trajectories in human-centric environments.
The resulting model captures the inherent uncertainty for future human trajectory prediction.
We identify new agents with limited historical data as a major contributor to error and demonstrate the complementary nature of 3D skeletal poses in reducing prediction error.
arXiv Detail & Related papers (2023-09-29T13:02:56Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, i.e., the change the robot's presence induces in people's behavior, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- Dense Crowd Flow-Informed Path Planning [24.849908664615104]
Flow-field extraction and discrete search are combined to create Flow-Informed Path Planning (FIPP).
A robot using FIPP not only reached its goal more quickly but was also shown to be more socially compliant than a robot using traditional techniques (a toy flow-cost sketch is given after this list).
arXiv Detail & Related papers (2022-06-01T18:40:57Z)
- An Embarrassingly Pragmatic Introduction to Vision-based Autonomous Robots [0.0]
We develop a small-scale autonomous vehicle capable of understanding the scene using only visual information.
We discuss the current state of Robotics and autonomous driving and the technological and ethical limitations that we can find in this field.
arXiv Detail & Related papers (2021-11-15T01:31:28Z)
- From Movement Kinematics to Object Properties: Online Recognition of Human Carefulness [112.28757246103099]
We show how a robot can infer online, from vision alone, whether or not the human partner is careful when moving an object.
We demonstrated that a humanoid robot could perform this inference with high accuracy (up to 81.3%) even with a low-resolution camera.
The prompt recognition of movement carefulness from observing the partner's action will allow robots to adapt their actions on the object to show the same degree of care as their human partners.
arXiv Detail & Related papers (2021-09-01T16:03:13Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Enabling the Sense of Self in a Dual-Arm Robot [2.741266294612776]
We present a neural network architecture that enables a dual-arm robot to get a sense of itself in an environment.
We demonstrate experimentally that a robot can distinguish itself with an accuracy of 88.7% on average in cluttered environmental settings.
arXiv Detail & Related papers (2020-11-13T17:25:07Z)
- Minimizing Robot Navigation-Graph For Position-Based Predictability By Humans [20.13307800821161]
In situations where humans and robots are moving in the same space whilst performing their own tasks, predictable paths are vital.
The cognitive effort for the human to predict the robot's path becomes untenable as the number of robots increases.
We propose to minimize the navigation-graph of the robot for position-based predictability.
arXiv Detail & Related papers (2020-10-28T22:09:10Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)
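As a companion to the Dense Crowd Flow-Informed Path Planning entry above, here is a toy, hypothetical sketch of the general idea of flow-informed planning: a discrete grid search whose edge costs penalise moving against an estimated crowd-flow field. It is not the FIPP algorithm from that paper; the flow-field layout, the cosine-based penalty, and the 4-connected Dijkstra search are assumptions made for illustration.

```python
# Toy sketch only (NOT the cited FIPP algorithm): Dijkstra over a grid
# where each edge cost grows when the move opposes the local crowd flow.
import heapq
import numpy as np

def flow_informed_cost(move, flow_vec, align_weight=1.0):
    """Cost of a unit grid move given the local crowd-flow vector."""
    step = np.asarray(move, dtype=float)
    step_len = float(np.linalg.norm(step))
    flow_norm = float(np.linalg.norm(flow_vec))
    if flow_norm < 1e-6:
        return step_len                      # no crowd flow: plain distance
    # Penalty grows as the move opposes the flow (cosine in [-1, 1]).
    cos = float(np.dot(step, flow_vec)) / (step_len * flow_norm)
    return step_len * (1.0 + align_weight * (1.0 - cos) / 2.0)

def plan(grid_shape, flow_field, start, goal):
    """Dijkstra on a 4-connected grid with flow-informed edge costs."""
    H, W = grid_shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == goal:
            break
        if d > dist.get((y, x), float("inf")):
            continue                          # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < H and 0 <= nx < W):
                continue
            nd = d + flow_informed_cost((dx, dy), flow_field[y, x])
            if nd < dist.get((ny, nx), float("inf")):
                dist[(ny, nx)] = nd
                prev[(ny, nx)] = (y, x)
                heapq.heappush(pq, (nd, (ny, nx)))
    # Reconstruct the path (empty if the goal was never reached).
    path, node = [], goal
    while node in prev or node == start:
        path.append(node)
        if node == start:
            break
        node = prev[node]
    return path[::-1]

if __name__ == "__main__":
    # Toy example: crowd flowing in the +x direction over a 10x10 grid.
    flow = np.zeros((10, 10, 2))
    flow[:, :, 0] = 1.0
    print(plan((10, 10), flow, start=(0, 0), goal=(9, 9)))
```

With a zero flow field the cost reduces to plain step length, so the planner behaves like ordinary Dijkstra; the penalty only reshapes paths where crowd flow is present.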