SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding
- URL: http://arxiv.org/abs/2206.10670v1
- Date: Tue, 21 Jun 2022 18:41:51 GMT
- Title: SCIM: Simultaneous Clustering, Inference, and Mapping for Open-World Semantic Scene Understanding
- Authors: Hermann Blum, Marcus G. Müller, Abel Gawel, Roland Siegwart, Cesar Cadena
- Abstract summary: We show how a robot can autonomously discover novel semantic classes and improve accuracy on known classes when exploring an unknown environment.
We develop a general framework for mapping and clustering that we then use to generate a self-supervised learning signal to update a semantic segmentation model.
In particular, we show how clustering parameters can be optimized during deployment and that fusion of multiple observation modalities improves novel object discovery compared to prior work.
- Score: 34.19666841489646
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to operate in human environments, a robot's semantic perception has
to overcome open-world challenges such as novel objects and domain gaps.
Autonomous deployment to such environments therefore requires robots to update
their knowledge and learn without supervision. We investigate how a robot can
autonomously discover novel semantic classes and improve accuracy on known
classes when exploring an unknown environment. To this end, we develop a
general framework for mapping and clustering that we then use to generate a
self-supervised learning signal to update a semantic segmentation model. In
particular, we show how clustering parameters can be optimized during
deployment and that fusion of multiple observation modalities improves novel
object discovery compared to prior work.
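To make the abstract's pipeline concrete, here is a minimal sketch of the cluster-then-pseudo-label idea: low-confidence map points are clustered in feature space, and each cluster becomes a candidate novel class. All names, the confidence threshold, and the use of DBSCAN are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of generating a self-supervised label signal by
# clustering low-confidence map points; not the paper's actual code.
import numpy as np
from sklearn.cluster import DBSCAN

NUM_KNOWN_CLASSES = 10
UNKNOWN_THRESHOLD = 0.5  # assumed confidence cutoff for "unknown" points

def pseudo_labels(features, class_probs, eps=0.3, min_samples=20):
    """Confident points keep their predicted class; unconfident points
    are clustered into candidate novel classes."""
    labels = np.argmax(class_probs, axis=1)
    confidence = np.max(class_probs, axis=1)
    unknown = confidence < UNKNOWN_THRESHOLD

    # Cluster only the low-confidence points in feature space.
    clustering = DBSCAN(eps=eps, min_samples=min_samples).fit(features[unknown])

    # Clusters become new class IDs after the known ones; noise (-1) stays -1.
    novel = np.full(unknown.sum(), -1)
    mask = clustering.labels_ >= 0
    novel[mask] = NUM_KNOWN_CLASSES + clustering.labels_[mask]

    labels[unknown] = novel
    return labels  # -1 marks points excluded from the training signal

# Toy usage: 1000 map points with 16-D features and 10-class predictions.
rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 16))
probs = rng.dirichlet(np.ones(NUM_KNOWN_CLASSES), size=1000)
print(np.unique(pseudo_labels(feats, probs)))
```

In a full system along these lines, the returned labels would be reprojected into the training views to fine-tune the segmentation network.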
Related papers
- Embodied-RAG: General Non-parametric Embodied Memory for Retrieval and Generation [65.23793829741014]
Embodied-RAG is a framework that enhances the model of an embodied agent with a non-parametric memory system.
At its core, Embodied-RAG's memory is structured as a semantic forest, storing language descriptions at varying levels of detail.
We demonstrate that Embodied-RAG effectively bridges RAG to the robotics domain, successfully handling over 200 explanation and navigation queries.
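For intuition only, a semantic forest that stores language descriptions at varying levels of detail could be modeled as a tree traversed from coarse to fine at query time. This sketch, including the token-overlap scorer, is an assumption rather than Embodied-RAG's actual design.

```python
# Illustrative sketch of a hierarchical semantic memory ("semantic forest");
# the structure and retrieval rule are assumptions, not Embodied-RAG's API.
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    description: str                 # language summary at this level of detail
    children: list = field(default_factory=list)

def overlap(a: str, b: str) -> int:
    """Toy relevance score: shared lowercase tokens (a real system would
    likely use embedding similarity)."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(root: MemoryNode, query: str) -> MemoryNode:
    """Greedily descend from coarse to fine descriptions."""
    node = root
    while node.children:
        best = max(node.children, key=lambda c: overlap(c.description, query))
        if overlap(best.description, query) == 0:
            break                    # no finer node matches; answer coarsely
        node = best
    return node

# Toy usage: a two-level memory of a building.
kitchen = MemoryNode("kitchen with a coffee machine on the counter")
office = MemoryNode("office with two desks and a whiteboard")
building = MemoryNode("one-floor office building", [kitchen, office])
print(retrieve(building, "where is the coffee machine").description)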
arXiv Detail & Related papers (2024-09-26T21:44:11Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Semi-Supervised Active Learning for Semantic Segmentation in Unknown Environments Using Informative Path Planning [27.460481202195012]
Self-supervised and fully supervised active learning methods have emerged to improve a robot's vision.
We propose a planning method for semi-supervised active learning of semantic segmentation.
We leverage an adaptive map-based planner guided towards the frontiers of unexplored space with high model uncertainty.
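As a hedged illustration of uncertainty-guided frontier selection (not the paper's planner), one could score each frontier by the mean predictive entropy of the model in its neighborhood:

```python
# Illustrative frontier scoring by model uncertainty (predictive entropy);
# an assumption-based sketch, not the paper's planner.
import numpy as np

def entropy(probs):
    """Per-cell predictive entropy of softmax outputs, shape (..., C)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def best_frontier(frontiers, prob_map, radius=5):
    """Choose the frontier cell whose neighborhood is most uncertain.
    frontiers: list of (row, col); prob_map: (H, W, C) class probabilities."""
    ent = entropy(prob_map)
    def score(f):
        r, c = f
        patch = ent[max(0, r - radius):r + radius + 1,
                    max(0, c - radius):c + radius + 1]
        return patch.mean()
    return max(frontiers, key=score)

# Toy usage: uncertainty concentrated in one corner of a 50x50 map.
probs = np.zeros((50, 50, 3)); probs[..., 0] = 1.0  # confident everywhere
probs[40:, 40:] = 1.0 / 3.0                         # uncertain corner
print(best_frontier([(5, 5), (45, 45)], probs))     # -> (45, 45)
```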
arXiv Detail & Related papers (2023-12-07T16:16:47Z)
- Robot Skill Generalization via Keypoint Integrated Soft Actor-Critic Gaussian Mixture Models [21.13906762261418]
A long-standing challenge for a robotic manipulation system is adapting and generalizing its acquired motor skills to unseen environments.
We tackle this challenge by employing hybrid skill models that integrate imitation and reinforcement paradigms.
We show that our method enables a robot to achieve significant zero-shot generalization to novel environments and to refine skills in the target environments faster than learning from scratch.
arXiv Detail & Related papers (2023-10-23T16:03:23Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach effectively and robustly perceives object pose and enables sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Stochastic Coherence Over Attention Trajectory For Continuous Learning In Video Streams [64.82800502603138]
This paper proposes a novel neural-network-based approach to progressively and autonomously develop pixel-wise representations in a video stream.
The proposed method is based on a human-like attention mechanism that allows the agent to learn by observing what is moving in the attended locations.
Our experiments in 3D virtual environments show that the proposed agents can learn to distinguish objects just by observing the video stream.
arXiv Detail & Related papers (2022-04-26T09:52:31Z)
- HARPS: An Online POMDP Framework for Human-Assisted Robotic Planning and Sensing [1.3678064890824186]
The Human Assisted Robotic Planning and Sensing (HARPS) framework is presented for active semantic sensing and planning in human-robot teams.
This approach lets humans opportunistically impose model structure and extend the range of semantic soft data in uncertain environments.
Simulations of a UAV-enabled target search application in a large-scale partially structured environment show significant improvements in time and belief state estimates.
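To illustrate the idea of human "semantic soft data" (with an invented likelihood model, not HARPS itself), a human report about a map region can be folded into a target-location belief with a single Bayes update:

```python
# Minimal Bayes update of a target-location belief with a human report;
# the reliability value is an illustrative assumption, not HARPS's model.
import numpy as np

def update_belief(belief, region_mask, human_says_inside, reliability=0.9):
    """belief: (H, W) prior over target location, sums to 1.
    region_mask: boolean (H, W) region the human refers to.
    human_says_inside: True for 'target is in that region', else False."""
    # Likelihood of the report given each candidate target cell.
    p_report = np.where(region_mask, reliability, 1.0 - reliability)
    if not human_says_inside:
        p_report = 1.0 - p_report
    posterior = belief * p_report
    return posterior / posterior.sum()

# Toy usage: uniform prior, human says "it is NOT in the left half".
belief = np.full((4, 4), 1 / 16)
left = np.zeros((4, 4), dtype=bool); left[:, :2] = True
belief = update_belief(belief, left, human_says_inside=False)
print(belief.round(3))  # probability mass shifts to the right half
```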
arXiv Detail & Related papers (2021-10-20T00:41:57Z)
- Language Understanding for Field and Service Robots in a Priori Unknown Environments [29.16936249846063]
This paper provides a novel learning framework that allows field and service robots to interpret and execute natural language instructions.
We use language as a "sensor" -- inferring spatial, topological, and semantic information implicit in natural language utterances.
We incorporate this distribution in a probabilistic language grounding model and infer a distribution over a symbolic representation of the robot's action space.
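A schematic, assumption-laden sketch of "language as a sensor": score candidate symbolic groundings of an utterance and normalize the scores into a distribution. The scoring model below is a stand-in, not the paper's grounding model.

```python
# Schematic sketch of grounding an utterance into a distribution over
# symbolic actions; the scorer is a toy stand-in, not the paper's model.
import numpy as np

def ground(utterance, candidate_actions, score_fn):
    """Return P(action | utterance) via softmax over candidate scores."""
    scores = np.array([score_fn(utterance, a) for a in candidate_actions])
    e = np.exp(scores - scores.max())
    return dict(zip(candidate_actions, e / e.sum()))

# Toy scorer: count utterance words appearing in the action's description.
actions = {"goto_shed": "go to the shed behind the barn",
           "goto_barn": "go to the barn"}
score = lambda u, a: len(set(u.split()) & set(actions[a].split()))
print(ground("drive to the shed", list(actions), score))
```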
arXiv Detail & Related papers (2021-05-21T15:13:05Z)
- Self-Improving Semantic Perception on a Construction Robot [6.823936426747797]
We propose a framework in which semantic models are continuously updated on the robot to adapt to its deployment environment.
Our system tightly couples multi-sensor perception and localisation to learn continuously from self-supervised pseudo labels.
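As a rough sketch of such coupling, multiple per-view predictions of the same map point can be fused into one pseudo-label; the majority-vote rule and all names here are assumptions, not the paper's system.

```python
# Rough sketch: fuse per-view semantic predictions of the same map point
# into one pseudo-label by majority vote (an assumed rule, not the paper's).
from collections import Counter

def fuse_point_labels(observations, min_votes=2):
    """observations: dict point_id -> list of class predictions seen from
    different viewpoints. Returns point_id -> fused pseudo-label, keeping
    only points with a sufficiently strong consensus."""
    fused = {}
    for pid, preds in observations.items():
        label, votes = Counter(preds).most_common(1)[0]
        if votes >= min_votes:
            fused[pid] = label
    return fused

# Toy usage: point 7 was seen as 'wall' twice and 'door' once.
obs = {7: ["wall", "wall", "door"], 8: ["floor"]}
print(fuse_point_labels(obs))  # {7: 'wall'} -- point 8 lacks consensus
```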
arXiv Detail & Related papers (2021-05-04T16:06:12Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
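One way to read a self-supervised "functional distance" is the timestep gap between two states in offline data; below is a tiny hedged sketch of generating such training pairs, with all names and the pairing scheme invented for illustration.

```python
# Tiny sketch of a self-supervised distance signal from offline
# trajectories: label a state pair with the timestep gap between them.
# Names and the pairing scheme are illustrative assumptions.
import numpy as np

def distance_pairs(trajectory, num_pairs, max_gap=20, rng=None):
    """trajectory: (T, D) array of visited states.
    Returns (s_i, s_j, gap) triples usable to regress a distance model."""
    rng = rng or np.random.default_rng()
    T = len(trajectory)
    i = rng.integers(0, T - 1, size=num_pairs)
    gaps = rng.integers(1, max_gap + 1, size=num_pairs)
    j = np.minimum(i + gaps, T - 1)
    return trajectory[i], trajectory[j], (j - i).astype(np.float32)

# Toy usage: a random-walk trajectory of 100 4-D states.
rng = np.random.default_rng(2)
traj = np.cumsum(rng.normal(size=(100, 4)), axis=0)
s_i, s_j, d = distance_pairs(traj, num_pairs=5, rng=rng)
print(d)  # regression targets for a learned distance function
```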
arXiv Detail & Related papers (2020-12-30T23:59:09Z)