Learning to Actively Reduce Memory Requirements for Robot Control Tasks
- URL: http://arxiv.org/abs/2008.07451v2
- Date: Sat, 14 Nov 2020 03:27:44 GMT
- Title: Learning to Actively Reduce Memory Requirements for Robot Control Tasks
- Authors: Meghan Booker and Anirudha Majumdar
- Abstract summary: State-of-the-art approaches for controlling robots often use memory representations that are excessively rich for the task or rely on hand-crafted tricks for memory efficiency.
This work provides a general approach for jointly synthesizing memory representations and policies.
- Score: 4.302265156822829
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Robots equipped with rich sensing modalities (e.g., RGB-D cameras) performing
long-horizon tasks motivate the need for policies that are highly
memory-efficient. State-of-the-art approaches for controlling robots often use
memory representations that are excessively rich for the task or rely on
hand-crafted tricks for memory efficiency. Instead, this work provides a
general approach for jointly synthesizing memory representations and policies;
the resulting policies actively seek to reduce memory requirements.
Specifically, we present a reinforcement learning framework that leverages an
implementation of the group LASSO regularization to synthesize policies that
employ low-dimensional and task-centric memory representations. We demonstrate
the efficacy of our approach with simulated examples including navigation in
discrete and continuous spaces as well as vision-based indoor navigation set in
a photo-realistic simulator. The results on these examples indicate that our
method is capable of finding policies that rely only on low-dimensional memory
representations, improving generalization, and actively reducing memory
requirements.
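
The abstract describes the method only at a high level; the following is a minimal sketch (an assumption, not the authors' implementation) of how a group-LASSO penalty over a recurrent policy's memory dimensions could look in PyTorch. The class, the grouping choice, and the coefficient `lambda_mem` are illustrative names introduced here.

```python
# Minimal sketch (assumed, not the authors' code): a recurrent policy whose
# memory dimensions are penalized with a group-LASSO term, so dimensions the
# task does not need are driven toward zero and can be pruned.
import torch
import torch.nn as nn

class MemoryPolicy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int, mem_dim: int = 32):
        super().__init__()
        self.rnn = nn.GRUCell(obs_dim, mem_dim)   # memory update m_t = f(o_t, m_{t-1})
        self.head = nn.Linear(mem_dim, act_dim)   # action output read from memory

    def forward(self, obs: torch.Tensor, mem: torch.Tensor):
        mem = self.rnn(obs, mem)
        return self.head(mem), mem

def group_lasso_penalty(policy: MemoryPolicy) -> torch.Tensor:
    """Sum of L2 norms over the weight group attached to each memory
    dimension; zeroing a whole group effectively removes that dimension."""
    # One possible grouping: the readout column and the recurrent-weight
    # column that read from memory dimension j together form group j.
    groups = torch.cat([policy.head.weight,        # (act_dim, mem_dim)
                        policy.rnn.weight_hh], 0)  # (3*mem_dim, mem_dim)
    return groups.norm(dim=0).sum()                # sum_j ||group_j||_2

# In any policy-gradient loop the regularizer is simply added to the loss,
# e.g. loss = rl_loss + lambda_mem * group_lasso_penalty(policy), where the
# (assumed) coefficient lambda_mem trades task reward against memory size.
```

Memory dimensions whose weight groups shrink to (near) zero can be dropped after training, which is one way to arrive at the low-dimensional, task-centric memory representation the abstract refers to.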
Related papers
- Memory, Benchmark & Robots: A Benchmark for Solving Complex Tasks with Reinforcement Learning [41.94295877935867]
We introduce MIKASA (Memory-Intensive Skills Assessment Suite for Agents), a comprehensive benchmark for memory RL.
We also develop MIKASA-Robo, a benchmark of 32 carefully designed memory-intensive tasks that assess memory capabilities in tabletop robotic manipulation.
Our contributions establish a unified framework for advancing memory RL research, driving the development of more reliable systems for real-world applications.
arXiv Detail & Related papers (2025-02-14T20:46:19Z)
- Toward Task Generalization via Memory Augmentation in Meta-Reinforcement Learning [43.69919534800985]
In reinforcement learning (RL), agents often struggle to perform well on tasks that differ from those encountered during training.
This limitation presents a challenge to the broader deployment of RL in diverse and dynamic task settings.
We introduce memory augmentation, a memory-based RL approach to improve task generalization.
arXiv Detail & Related papers (2025-02-03T17:00:19Z)
- 3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning [65.40458559619303]
We propose 3D-Mem, a novel 3D scene memory framework for embodied agents.
3D-Mem employs informative multi-view images, termed Memory Snapshots, to represent the scene.
It further integrates frontier-based exploration by introducing Frontier Snapshots, glimpses of unexplored areas, enabling agents to make informed decisions.
arXiv Detail & Related papers (2024-11-23T09:57:43Z)
- DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution [114.61347672265076]
Development of MLLMs for real-world robots is challenging due to the typically limited computation and memory capacities available on robotic platforms.
We propose a Dynamic Early-Exit Framework for Robotic Vision-Language-Action Model (DeeR) that automatically adjusts the size of the activated MLLM.
DeeR reduces the computational cost of the LLM by 5.2-6.5x and its GPU memory usage by 2-6x without compromising performance.
arXiv Detail & Related papers (2024-11-04T18:26:08Z)
- Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and require long-term memory.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z)
- Think Before You Act: Decision Transformers with Working Memory [44.18926449252084]
Decision Transformer-based decision-making agents have shown the ability to generalize across multiple tasks.
We argue that this inefficiency stems from the forgetting phenomenon, in which a model memorizes its behaviors in its parameters throughout training.
We propose a working memory module to store, blend, and retrieve information for different downstream tasks.
arXiv Detail & Related papers (2023-05-24T01:20:22Z)
- Composable Learning with Sparse Kernel Representations [110.19179439773578]
We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space.
We improve the sample complexity of this approach by imposing structure on the state-action function through a normalized advantage function.
We demonstrate the performance of this algorithm on learning obstacle-avoidance policies in multiple simulations of a robot equipped with a laser scanner while navigating in a 2D environment.
arXiv Detail & Related papers (2021-03-26T13:58:23Z)
- Semantically Constrained Memory Allocation (SCMA) for Embedding in Efficient Recommendation Systems [27.419109620575313]
A key challenge for deep learning models is to work with millions of categorical classes or tokens.
We propose a novel formulation of memory-shared embeddings, where memory is shared in proportion to the overlap in semantic information.
We demonstrate a significant reduction in the memory footprint while maintaining performance.
arXiv Detail & Related papers (2021-02-24T19:55:49Z)
- End-to-End Egospheric Spatial Memory [32.42361470456194]
We propose a parameter-free module, Egospheric Spatial Memory (ESM), which encodes the memory in an ego-sphere around the agent.
ESM can be trained end-to-end via either imitation or reinforcement learning.
We show applications to semantic segmentation on the ScanNet dataset, where ESM naturally combines image-level and map-level inference modalities.
arXiv Detail & Related papers (2021-02-15T18:59:07Z)
- HM4: Hidden Markov Model with Memory Management for Visual Place Recognition [54.051025148533554]
We develop a Hidden Markov Model approach for visual place recognition in autonomous driving.
Our algorithm, dubbed HM4, exploits temporal look-ahead to transfer promising candidate images between passive storage and active memory.
We show that this allows constant time and space inference for a fixed coverage area.
arXiv Detail & Related papers (2020-11-01T08:49:24Z)
- Learning to Ignore: Long Document Coreference with Bounded Memory Neural Networks [65.3963282551994]
We argue that keeping all entities in memory is unnecessary, and we propose a memory-augmented neural network that tracks only a small bounded number of entities at a time.
We show that (a) the model remains competitive with models with high memory and computational requirements on OntoNotes and LitBank, and (b) the model learns an efficient memory management strategy easily outperforming a rule-based strategy.
arXiv Detail & Related papers (2020-10-06T15:16:31Z)
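
The bounded-memory coreference entry above is summarized only at a high level; as a rough Python sketch of the bounded-memory idea (an assumption, not the paper's model), one can keep a fixed number of entity slots and evict by recency. The class name, capacity, and the least-recently-mentioned rule below are illustrative stand-ins for the learned memory-management strategy the paper describes.

```python
# Rough sketch of a bounded entity memory (assumed, not the paper's model):
# keep at most `capacity` entity representations and evict the least recently
# mentioned one. The paper learns its memory-management policy; the LRU rule
# here is only a simple rule-based stand-in.
from collections import OrderedDict

class BoundedEntityMemory:
    def __init__(self, capacity: int = 20):
        self.capacity = capacity
        self.slots = OrderedDict()  # entity_id -> representation

    def update(self, entity_id, representation) -> None:
        if entity_id in self.slots:
            self.slots.move_to_end(entity_id)   # refresh recency on re-mention
        elif len(self.slots) >= self.capacity:
            self.slots.popitem(last=False)      # evict the stalest entity
        self.slots[entity_id] = representation

    def get(self, entity_id):
        return self.slots.get(entity_id)
```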
This list is automatically generated from the titles and abstracts of the papers on this site.