Robot Active Neural Sensing and Planning in Unknown Cluttered
Environments
- URL: http://arxiv.org/abs/2208.11079v2
- Date: Wed, 24 Aug 2022 00:52:09 GMT
- Title: Robot Active Neural Sensing and Planning in Unknown Cluttered
Environments
- Authors: Hanwen Ren, Ahmed H. Qureshi
- Abstract summary: Active sensing and planning in unknown, cluttered environments is an open challenge for robots intending to provide home service, search and rescue, narrow-passage inspection, and medical assistance.
We present an active neural sensing approach that generates kinematically feasible viewpoint sequences for a robot manipulator with an in-hand camera, gathering the minimum number of observations needed to reconstruct the underlying environment.
Our framework actively collects visual RGBD observations, aggregates them into a scene representation, and performs object shape inference to avoid unnecessary robot interactions with the environment.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active sensing and planning in unknown, cluttered environments is an open
challenge for robots intending to provide home service, search and rescue,
narrow-passage inspection, and medical assistance. Although many active sensing
methods exist, they often consider open spaces, assume known settings, or
mostly do not generalize to real-world scenarios. We present an active neural
sensing approach that generates kinematically feasible viewpoint sequences
for a robot manipulator with an in-hand camera, gathering the minimum number
of observations needed to reconstruct the underlying environment. Our framework
actively collects visual RGBD observations, aggregates them into a scene
representation, and performs object shape inference to avoid unnecessary robot
interactions with the environment.
interactions with the environment. We train our approach on synthetic data with
domain randomization and demonstrate its successful execution via sim-to-real
transfer in reconstructing narrow, covered, real-world cabinet environments
cluttered with unknown objects. The natural cabinet scenarios impose
significant challenges for robot motion and scene reconstruction due to
surrounding obstacles and low ambient lighting conditions. However, despite
unfavorable settings, our method exhibits high performance compared to its
baselines in terms of various environment reconstruction metrics, including
planning speed, the number of viewpoints, and overall scene coverage.
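The pipeline the abstract describes — observe, aggregate into a scene representation, then plan the next view — follows the general next-best-view pattern. The sketch below is a minimal toy illustration of that greedy loop on a 1-D occupancy grid; the grid, the range-based sensor model, and every function name are illustrative stand-ins, not the paper's learned networks or actual planner.

```python
def coverage(grid):
    """Fraction of cells observed at least once."""
    return sum(grid) / len(grid)

def simulate_view(grid, viewpoint, span=3):
    """Toy sensor model: mark all cells within `span` of the viewpoint as observed."""
    lo, hi = max(viewpoint - span, 0), min(viewpoint + span + 1, len(grid))
    return grid[:lo] + [1] * (hi - lo) + grid[hi:]

def active_sensing_loop(num_cells=20, candidates=range(0, 20, 2),
                        target_coverage=0.95, max_views=10):
    """Greedy next-best-view loop: at each step, pick the candidate viewpoint
    that adds the most new coverage; stop once the target is reached."""
    grid = [0] * num_cells
    chosen = []
    for _ in range(max_views):
        if coverage(grid) >= target_coverage:
            break
        best_vp, best_gain = None, 0.0
        # `candidates` stands in for the kinematically feasible camera poses.
        for vp in candidates:
            gain = coverage(simulate_view(grid, vp)) - coverage(grid)
            if gain > best_gain:
                best_vp, best_gain = vp, gain
        if best_vp is None:  # no remaining viewpoint adds information
            break
        grid = simulate_view(grid, best_vp)
        chosen.append(best_vp)
    return chosen, coverage(grid)
```

Running `active_sensing_loop()` with the defaults reaches the coverage target in a handful of views; in the real system, the coverage-gain score would come from the learned scene representation rather than a hand-coded sensor model.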
Related papers
- ReALFRED: An Embodied Instruction Following Benchmark in Photo-Realistic Environments [13.988804095409133]
We propose the ReALFRED benchmark, which employs real-world scenes, objects, and room layouts to enable agents to learn to complete household tasks.
Specifically, we extend the ALFRED benchmark with updates for larger environmental spaces with smaller visual domain gaps.
With ReALFRED, we analyze previously crafted methods for the ALFRED benchmark and observe that they consistently yield lower performance in all metrics.
arXiv Detail & Related papers (2024-07-26T07:00:27Z)
- Outlier-Robust Long-Term Robotic Mapping Leveraging Ground Segmentation [1.7948767405202701]
We propose a robust long-term robotic mapping system that works out of the box.
It combines (i) fast and robust ground segmentation to reject outliers with (ii) outlier-robust registration, built on that segmentation, which handles gross outliers.
arXiv Detail & Related papers (2024-05-18T04:56:15Z)
- AI planning in the imagination: High-level planning on learned abstract search spaces [68.75684174531962]
We propose a new method, called PiZero, that gives an agent the ability to plan in an abstract search space that the agent learns during training.
We evaluate our method on multiple domains, including the traveling salesman problem, Sokoban, 2048, the facility location problem, and Pacman.
arXiv Detail & Related papers (2023-08-16T22:47:16Z)
- Embodied Agents for Efficient Exploration and Smart Scene Description [47.82947878753809]
We tackle a setting for visual navigation in which an autonomous agent needs to explore and map an unseen indoor environment.
We propose and evaluate an approach that combines recent advances in visual robotic exploration and image captioning.
Our approach can generate smart scene descriptions that maximize semantic knowledge of the environment and avoid repetitions.
arXiv Detail & Related papers (2023-01-17T19:28:01Z)
- Incremental 3D Scene Completion for Safe and Efficient Exploration Mapping and Planning [60.599223456298915]
We propose a novel way to integrate deep learning into exploration by leveraging 3D scene completion for informed, safe, and interpretable mapping and planning.
We show that our method can speed up coverage of an environment by 73% compared to the baselines with only minimal reduction in map accuracy.
Even if scene completions are not included in the final map, we show that they can be used to guide the robot to choose more informative paths, speeding up the measurement of the scene with the robot's sensors by 35%.
arXiv Detail & Related papers (2022-08-17T14:19:33Z)
- Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z)
- Maintaining a Reliable World Model using Action-aware Perceptual Anchoring [4.971403153199917]
Robots need to maintain a model of their surroundings even when objects go out of view and are no longer visible.
This requires anchoring perceptual information onto symbols that represent the objects in the environment.
We present a model for action-aware perceptual anchoring that enables robots to track objects in a persistent manner.
arXiv Detail & Related papers (2021-07-07T06:35:14Z)
- Self-Improving Semantic Perception on a Construction Robot [6.823936426747797]
We propose a framework in which semantic models are continuously updated on the robot to adapt to the deployment environments.
Our system therefore tightly couples multi-sensor perception and localisation to continuously learn from self-supervised pseudo labels.
arXiv Detail & Related papers (2021-05-04T16:06:12Z)
- Mutual Information Maximization for Robust Plannable Representations [82.83676853746742]
We present MIRO, an information theoretic representational learning algorithm for model-based reinforcement learning.
We show that our approach is more robust than reconstruction objectives in the presence of distractors and cluttered scenes.
arXiv Detail & Related papers (2020-05-16T21:58:47Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.