Learning Continuous Control Policies for Information-Theoretic Active
Perception
- URL: http://arxiv.org/abs/2209.12427v2
- Date: Tue, 16 May 2023 19:11:13 GMT
- Title: Learning Continuous Control Policies for Information-Theoretic Active
Perception
- Authors: Pengzhi Yang and Yuhan Liu and Shumon Koga and Arash Asgharivaskasi
and Nikolay Atanasov
- Abstract summary: We tackle the problem of learning a control policy that maximizes the mutual information between the landmark states and the sensor observations.
We employ a Kalman filter to convert the partially observable problem in the landmark state into a Markov decision process (MDP), a differentiable field of view to shape the reward, and an attention-based neural network to represent the control policy.
- Score: 24.297016904005257
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a method for learning continuous control policies for
active landmark localization and exploration using an information-theoretic
cost. We consider a mobile robot detecting landmarks within a limited sensing
range, and tackle the problem of learning a control policy that maximizes the
mutual information between the landmark states and the sensor observations. We
employ a Kalman filter to convert the partially observable problem in the
landmark state into a Markov decision process (MDP), a differentiable field of view
to shape the reward, and an attention-based neural network to represent the
control policy. The approach is further unified with active volumetric mapping
to promote exploration in addition to landmark localization. The performance is
demonstrated in several simulated landmark localization tasks in comparison
with benchmark methods.
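Under the linear-Gaussian assumptions behind the Kalman filter, the information-theoretic cost in the abstract reduces to a log-determinant difference of the landmark covariance before and after an observation: I(x; z) = ½(log det Σ_prior − log det Σ_post). The sketch below illustrates this reward computation only; it is not the paper's implementation, and the observation model H, noise R, and all names are illustrative assumptions (the differentiable field of view and attention-based policy are omitted).

```python
import numpy as np

def kalman_update(Sigma, H, R):
    """Kalman covariance update for a linear-Gaussian observation z = Hx + v, v ~ N(0, R)."""
    S = H @ Sigma @ H.T + R                 # innovation covariance
    K = Sigma @ H.T @ np.linalg.inv(S)      # Kalman gain
    return (np.eye(Sigma.shape[0]) - K @ H) @ Sigma

def mi_reward(Sigma_prior, H, R):
    """Mutual information I(x; z) = 0.5 * (log det Sigma_prior - log det Sigma_post)."""
    Sigma_post = kalman_update(Sigma_prior, H, R)
    _, logdet_prior = np.linalg.slogdet(Sigma_prior)
    _, logdet_post = np.linalg.slogdet(Sigma_post)
    return 0.5 * (logdet_prior - logdet_post)

# Example: a 2-D landmark observed directly with isotropic sensor noise.
Sigma = 4.0 * np.eye(2)   # prior landmark covariance
H = np.eye(2)             # observation model (direct position measurement)
R = 1.0 * np.eye(2)       # measurement noise covariance
r = mi_reward(Sigma, H, R)
```

Because this reward depends on the covariance rather than the (unknown) landmark positions, maximizing it drives the policy toward observations that most reduce landmark uncertainty.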
Related papers
- Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z)
- Local Feature Matching Using Deep Learning: A Survey [19.322545965903608]
Local feature matching enjoys wide-ranging applications in the realm of computer vision, encompassing domains such as image retrieval, 3D reconstruction, and object recognition.
In recent years, the introduction of deep learning models has sparked widespread exploration into local feature matching techniques.
The paper also explores the practical application of local feature matching in diverse domains such as Structure from Motion, Remote Sensing Image Registration, and Medical Image Registration.
arXiv Detail & Related papers (2024-01-31T04:32:41Z)
- Background Activation Suppression for Weakly Supervised Object Localization and Semantic Segmentation [84.62067728093358]
Weakly supervised object localization and semantic segmentation aim to localize objects using only image-level labels.
A new paradigm has emerged that generates a foreground prediction map to achieve pixel-level localization.
This paper presents two surprising experimental observations on the object localization learning process.
arXiv Detail & Related papers (2023-09-22T15:44:10Z)
- ROIFormer: Semantic-Aware Region of Interest Transformer for Efficient Self-Supervised Monocular Depth Estimation [6.923035780685481]
We propose an efficient local adaptive attention method for geometry-aware representation enhancement.
We leverage geometric cues from semantic information to learn local adaptive bounding boxes to guide unsupervised feature aggregation.
Our proposed method establishes a new state of the art in the self-supervised monocular depth estimation task.
arXiv Detail & Related papers (2022-12-12T06:38:35Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art performance.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Point-Level Region Contrast for Object Detection Pre-Training [147.47349344401806]
We present point-level region contrast, a self-supervised pre-training approach for the task of object detection.
Our approach performs contrastive learning by directly sampling individual point pairs from different regions.
Compared to an aggregated representation per region, our approach is more robust to the change in input region quality.
arXiv Detail & Related papers (2022-02-09T18:56:41Z)
- Active Visual Localization in Partially Calibrated Environments [35.48595012305253]
Humans can robustly localize themselves without a map after getting lost by following prominent visual cues or landmarks.
In this work, we aim to endow autonomous agents with the same ability. This ability is important in robotics applications, yet very challenging when an agent is exposed to partially calibrated environments.
We propose an indoor scene dataset ACR-6, which consists of both synthetic and real data and simulates challenging scenarios for active visual localization.
arXiv Detail & Related papers (2020-12-08T08:00:55Z)
- Localized active learning of Gaussian process state space models [63.97366815968177]
A globally accurate model is not required to achieve good performance in many common control applications.
We propose an active learning strategy for Gaussian process state space models that aims to obtain an accurate model on a bounded subset of the state-action space.
By employing model predictive control, the proposed technique integrates information collected during exploration and adaptively improves its exploration strategy.
arXiv Detail & Related papers (2020-05-04T05:35:02Z)
- InfoBot: Transfer and Exploration via the Information Bottleneck [105.28380750802019]
A central challenge in reinforcement learning is discovering effective policies for tasks where rewards are sparsely distributed.
We propose to learn about decision states from prior experience.
We find that this simple mechanism effectively identifies decision states, even in partially observed settings.
arXiv Detail & Related papers (2019-01-30T15:33:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.