Active Exploration based on Information Gain by Particle Filter for
Efficient Spatial Concept Formation
- URL: http://arxiv.org/abs/2211.10934v2
- Date: Mon, 12 Jun 2023 06:10:13 GMT
- Title: Active Exploration based on Information Gain by Particle Filter for
Efficient Spatial Concept Formation
- Authors: Akira Taniguchi, Yoshiki Tabuchi, Tomochika Ishikawa, Lotfi El Hafi,
Yoshinobu Hagiwara, Tadahiro Taniguchi
- Abstract summary: We propose an active inference method, referred to as spatial concept formation with information gain-based active exploration.
This study interprets the robot's action as a selection of destinations to ask the user, 'What kind of place is this?' in the context of active inference.
Our experiment demonstrated the effectiveness of the SpCoAE in efficiently determining a destination for learning appropriate spatial concepts in home environments.
- Score: 5.350057408744861
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous robots need to learn the categories of various places by exploring
their environments and interacting with users. However, preparing training
datasets with linguistic instructions from users is time-consuming and
labor-intensive. Moreover, effective exploration is essential for appropriate
concept formation and rapid environmental coverage. To address this issue, we
propose an active inference method, referred to as spatial concept formation
with information gain-based active exploration (SpCoAE), which combines
sequential Bayesian inference using particle filters and information gain-based
destination determination in a probabilistic generative model. This study
interprets the robot's action as a selection of destinations to ask the user,
'What kind of place is this?' in the context of active inference. This study
provides insights into the technical aspects of the proposed method, including
active perception and exploration by the robot, and how the method can enable
mobile robots to learn spatial concepts through active exploration. Our
experiment demonstrated the effectiveness of the SpCoAE in efficiently
determining a destination for learning appropriate spatial concepts in home
environments.
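As a rough illustration of the information gain-based destination determination described above, the following is a minimal Python sketch, not the authors' SpCoAE implementation: the `Particle` container, `information_gain`, and `select_destination` are hypothetical names, and the user's answer at each candidate destination is modeled as a simple categorical distribution per model hypothesis.

```python
# Minimal, illustrative sketch of information gain-based destination selection
# over a particle filter. This is NOT the authors' SpCoAE implementation:
# Particle, information_gain, and select_destination are hypothetical names,
# and the user's answer is modeled as a categorical distribution per destination.
import numpy as np
from dataclasses import dataclass


@dataclass
class Particle:
    weight: float           # normalized importance weight
    word_probs: np.ndarray  # shape (n_destinations, n_words):
                            # p(answer w | destination x, this hypothesis)


def entropy(p: np.ndarray) -> float:
    """Shannon entropy in nats, ignoring zero entries."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def information_gain(particles: list[Particle], dest: int) -> float:
    """Mutual information between the user's answer at `dest` and the model
    hypothesis carried by the particle set:
    IG = H[ sum_k w_k p(w|x,k) ] - sum_k w_k H[ p(w|x,k) ]."""
    mixture = sum(pt.weight * pt.word_probs[dest] for pt in particles)
    expected_conditional = sum(pt.weight * entropy(pt.word_probs[dest])
                               for pt in particles)
    return entropy(mixture) - expected_conditional


def select_destination(particles: list[Particle], candidates: list[int]) -> int:
    """Choose the destination where asking 'What kind of place is this?' is
    expected to be most informative about the spatial concepts."""
    return max(candidates, key=lambda d: information_gain(particles, d))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_dest, n_words, n_particles = 5, 4, 8
    particles = [
        Particle(weight=1.0 / n_particles,
                 word_probs=rng.dirichlet(np.ones(n_words), size=n_dest))
        for _ in range(n_particles)
    ]
    best = select_destination(particles, list(range(n_dest)))
    print("most informative destination to ask about:", best)
```

Here the information gain for a destination is the mutual information between the predicted user answer and the model hypothesis carried by the particles, estimated as the entropy of the particle-weighted mixture minus the expected per-particle entropy; choosing the destination that maximizes this quantity is one way to read the information gain-based destination determination in the abstract.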
Related papers
- CON: Continual Object Navigation via Data-Free Inter-Agent Knowledge Transfer in Unseen and Unfamiliar Places [1.474723404975345]
This work explores the potential of brief inter-agent knowledge transfer (KT) to enhance robotic object goal navigation (ON).
We frame this process as a data-free continual learning (CL) challenge, aiming to transfer knowledge from a black-box model (teacher) to a new model (student).
To address this gap, we propose a lightweight, plug-and-play KT module targeting non-cooperative black-box teachers in open-world settings.
arXiv Detail & Related papers (2024-09-23T10:50:11Z)
- KOI: Accelerating Online Imitation Learning via Hybrid Key-state Guidance [51.09834120088799]
We introduce the hybrid Key-state guided Online Imitation (KOI) learning method.
We use visual-language models to extract semantic key states from expert trajectories, indicating the objectives of "what to do".
Within the intervals between semantic key states, optical flow is employed to capture motion key states to understand the mechanisms of "how to do".
arXiv Detail & Related papers (2024-08-06T02:53:55Z)
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Embodied Agents for Efficient Exploration and Smart Scene Description [47.82947878753809]
We tackle a setting for visual navigation in which an autonomous agent needs to explore and map an unseen indoor environment.
We propose and evaluate an approach that combines recent advances in visual robotic exploration and image captioning.
Our approach can generate smart scene descriptions that maximize semantic knowledge of the environment and avoid repetitions.
arXiv Detail & Related papers (2023-01-17T19:28:01Z)
- Map Induction: Compositional spatial submap learning for efficient exploration in novel environments [25.00757828975447]
We show that humans explore new environments efficiently by inferring the structure of unobserved spaces.
Using a new behavioral Map Induction Task, we demonstrate that this computational framework explains human exploration behavior better than non-inductive models.
arXiv Detail & Related papers (2021-10-23T21:23:04Z)
- Adaptive Informative Path Planning Using Deep Reinforcement Learning for UAV-based Active Sensing [2.6519061087638014]
We propose a new approach for informative path planning based on deep reinforcement learning (RL).
Our method combines Monte Carlo tree search with an offline-learned neural network predicting informative sensing actions.
By deploying the trained network during a mission, our method enables sample-efficient online replanning on physical platforms with limited computational resources.
(An illustrative sketch of this tree-search-plus-learned-prior combination appears after this list.)
arXiv Detail & Related papers (2021-09-28T09:00:55Z)
- Rapid Exploration for Open-World Navigation with Latent Goal Models [78.45339342966196]
We describe a robotic learning system for autonomous exploration and navigation in diverse, open-world environments.
At the core of our method is a learned latent variable model of distances and actions, along with a non-parametric topological memory of images.
We use an information bottleneck to regularize the learned policy, giving us (i) a compact visual representation of goals, (ii) improved generalization capabilities, and (iii) a mechanism for sampling feasible goals for exploration.
arXiv Detail & Related papers (2021-04-12T23:14:41Z)
- ViNG: Learning Open-World Navigation with Visual Goals [82.84193221280216]
We propose a learning-based navigation system for reaching visually indicated goals.
We show that our system, which we call ViNG, outperforms previously-proposed methods for goal-conditioned reinforcement learning.
We demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection.
arXiv Detail & Related papers (2020-12-17T18:22:32Z)
- Object Goal Navigation using Goal-Oriented Semantic Exploration [98.14078233526476]
This work studies the problem of object goal navigation which involves navigating to an instance of the given object category in unseen environments.
We propose a modular system called 'Goal-Oriented Semantic Exploration', which builds an episodic semantic map and uses it to explore the environment efficiently.
arXiv Detail & Related papers (2020-07-01T17:52:32Z)
- Spatial Concept-Based Navigation with Human Speech Instructions via Probabilistic Inference on Bayesian Generative Model [8.851071399120542]
The aim of this study is to enable a mobile robot to perform navigational tasks with human speech instructions.
Path planning was formalized as the spatial probabilistic distribution on the path-trajectory under speech instruction.
We demonstrated path planning based on human instruction using acquired spatial concepts to verify the usefulness of the proposed approach in the simulator and in real environments.
arXiv Detail & Related papers (2020-02-18T05:35:29Z)
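One of the related entries above, 'Adaptive Informative Path Planning Using Deep Reinforcement Learning for UAV-based Active Sensing', combines Monte Carlo tree search with an offline-learned network that proposes informative sensing actions. The sketch below illustrates that general combination only; it is not the paper's code, and `step`, `reward`, and `action_prior` are hypothetical stand-ins, with `action_prior` playing the role of the learned network and `reward` an information-gain proxy.

```python
# Illustrative sketch only: Monte Carlo tree search guided by an offline-learned
# action prior, in the spirit of learned-prior informative path planning.
# step, reward, and action_prior are hypothetical stand-ins, not any paper's API.
import math


class Node:
    """Search-tree node over robot states (e.g. candidate sensing poses)."""
    def __init__(self, state):
        self.state = state
        self.children = {}   # action -> Node
        self.visits = 0
        self.value_sum = 0.0


def puct_select(node, priors, c=1.4):
    """PUCT rule: exploit the running mean value, explore in proportion to the
    learned prior over actions."""
    def score(action):
        child = node.children[action]
        q = child.value_sum / child.visits if child.visits else 0.0
        u = c * priors[action] * math.sqrt(node.visits + 1) / (1 + child.visits)
        return q + u
    return max(node.children, key=score)


def mcts_plan(root_state, actions, step, reward, action_prior,
              n_simulations=200, depth=5):
    """Return the sensing action with the most visits at the root.
    reward(state, action) stands in for an information-gain proxy and
    action_prior(state) for the offline-learned network."""
    root = Node(root_state)
    for _ in range(n_simulations):
        node, path, rewards = root, [root], []
        for _ in range(depth):
            for a in actions:                      # lazy expansion
                if a not in node.children:
                    node.children[a] = Node(step(node.state, a))
            a = puct_select(node, action_prior(node.state))
            rewards.append(reward(node.state, a))
            node = node.children[a]
            path.append(node)
        ret = 0.0                                  # backpropagate return-to-go
        path[-1].visits += 1
        for n, r in zip(reversed(path[:-1]), reversed(rewards)):
            ret += r
            n.visits += 1
            n.value_sum += ret
    return max(root.children, key=lambda a: root.children[a].visits)


if __name__ == "__main__":
    # Toy 1-D problem: states are integers, actions move the sensor by +/-1,
    # and the toy "information gain" prefers states divisible by 3.
    actions = [-1, +1]
    def step(s, a): return s + a
    def reward(s, a): return 1.0 if (s + a) % 3 == 0 else 0.1
    def action_prior(s): return {a: 1.0 / len(actions) for a in actions}
    print("chosen sensing action:",
          mcts_plan(0, actions, step, reward, action_prior))
```

The PUCT-style selection rule biases the search toward actions the learned prior scores highly while still exploiting the returns observed in earlier simulations, which is one way such a learned prior can make online replanning more sample-efficient.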
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.