Quantum adaptive agents with efficient long-term memories
- URL: http://arxiv.org/abs/2108.10876v1
- Date: Tue, 24 Aug 2021 17:57:05 GMT
- Title: Quantum adaptive agents with efficient long-term memories
- Authors: Thomas J. Elliott, Mile Gu, Andrew J. P. Garner, Jayne Thompson
- Abstract summary: The more information the agent must recall from its past experiences, the more memory it will need.
We uncover the most general form a quantum agent needs to adopt to maximise memory compression advantages.
We show these encodings can exhibit extremely favourable scaling advantages relative to memory-minimal classical agents.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Central to the success of adaptive systems is their ability to interpret
signals from their environment and respond accordingly -- they act as agents
interacting with their surroundings. Such agents typically perform better when
able to execute increasingly complex strategies. This comes with a cost: the
more information the agent must recall from its past experiences, the more
memory it will need. Here we investigate the power of agents capable of quantum
information processing. We uncover the most general form a quantum agent need
adopt to maximise memory compression advantages, and provide a systematic means
of encoding their memory states. We show these encodings can exhibit extremely
favourable scaling advantages relative to memory-minimal classical agents when
information must be retained about events increasingly far into the past.
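As a rough numerical illustration of the memory comparison the abstract refers to (not the paper's actual construction), the sketch below contrasts a classical agent's memory cost, the Shannon entropy of its internal-state distribution, with the von Neumann entropy of a quantum agent whose memory states are allowed to be non-orthogonal. The probabilities and overlap value are placeholder assumptions chosen only to show the calculation.

```python
# Illustrative sketch only: placeholder probabilities and overlaps, not the paper's encoding.
import numpy as np

def shannon_entropy(p):
    """Classical memory cost in bits: Shannon entropy of the internal-state distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def quantum_memory_cost(p, overlaps):
    """Von Neumann entropy of rho = sum_i p_i |sigma_i><sigma_i|.

    Uses the fact that rho shares its nonzero spectrum with the matrix
    M_ij = sqrt(p_i p_j) <sigma_i|sigma_j>, so only the pairwise overlaps are needed.
    """
    p = np.asarray(p, dtype=float)
    M = np.sqrt(np.outer(p, p)) * np.asarray(overlaps, dtype=float)
    eigvals = np.linalg.eigvalsh(M)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Two equally likely memory states with assumed overlap <sigma_0|sigma_1> = 0.8.
p = [0.5, 0.5]
overlaps = [[1.0, 0.8],
            [0.8, 1.0]]

print(f"classical cost: {shannon_entropy(p):.3f} bits")            # 1.000
print(f"quantum cost:   {quantum_memory_cost(p, overlaps):.3f} qubits")  # ~0.469
```

The quantum cost falls below one bit precisely because the two memory states overlap; the paper's contribution concerns the most general encodings that maximise this kind of advantage as the relevant past stretches further back.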
Related papers
- Communication Learning in Multi-Agent Systems from Graph Modeling Perspective [62.13508281188895]
We introduce a novel approach wherein we conceptualize the communication architecture among agents as a learnable graph.
We introduce a temporal gating mechanism for each agent, enabling dynamic decisions on whether to receive shared information at a given time.
arXiv Detail & Related papers (2024-11-01T05:56:51Z) - Prioritized Generative Replay [121.83947140497655]
We propose a prioritized, parametric version of an agent's memory, using generative models to capture online experience.
This paradigm enables densification of past experience, with new generations that benefit from the generative model's generalization capacity.
We show this recipe can be instantiated using conditional diffusion models and simple relevance functions.
arXiv Detail & Related papers (2024-10-23T17:59:52Z) - Stable Hadamard Memory: Revitalizing Memory-Augmented Agents for Reinforcement Learning [64.93848182403116]
Current deep-learning memory models struggle in reinforcement learning environments that are partially observable and involve long-term dependencies.
We introduce the Stable Hadamard Memory, a novel memory model for reinforcement learning agents.
Our approach significantly outperforms state-of-the-art memory-based methods on challenging partially observable benchmarks.
arXiv Detail & Related papers (2024-10-14T03:50:17Z) - SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, using a minimal set of late pre-trained layers can alleviate the peak memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z) - Spatially-Aware Transformer for Embodied Agents [20.498778205143477]
This paper explores the use of Spatially-Aware Transformer models that incorporate spatial information.
We demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks.
We also propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning.
arXiv Detail & Related papers (2024-02-23T07:46:30Z) - ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z) - A Machine with Short-Term, Episodic, and Semantic Memory Systems [9.42475956340287]
Inspired by cognitive science theories of explicit human memory, we have modeled an agent with short-term, episodic, and semantic memory systems.
Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
arXiv Detail & Related papers (2022-12-05T08:34:23Z) - Efficient algorithms for quantum information bottleneck [64.67104066707309]
We propose a new and general algorithm for the quantum generalisation of information bottleneck.
Our algorithm converges faster and with stronger convergence guarantees than prior approaches.
Notably, we discover that a quantum system can achieve strictly better performance than a classical system of the same size regarding quantum information bottleneck.
arXiv Detail & Related papers (2022-08-22T14:20:05Z) - Experimental quantum speed-up in reinforcement learning agents [0.17849902073068336]
Reinforcement learning (RL) is an important paradigm within artificial intelligence (AI).
We present a RL experiment where the learning of an agent is boosted by utilizing a quantum communication channel with the environment.
We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor.
arXiv Detail & Related papers (2021-03-10T19:01:12Z) - A Proposal for Intelligent Agents with Episodic Memory [0.9236074230806579]
We argue that an agent would benefit from an episodic memory.
This memory encodes the agent's experience in such a way that the agent can relive the experience.
We propose an architecture combining ANNs and standard Computer Science techniques for supporting storage and retrieval of episodic memories.
arXiv Detail & Related papers (2020-05-07T00:26:42Z) - Augmented Replay Memory in Reinforcement Learning With Continuous Control [1.6752182911522522]
Online reinforcement learning agents are currently able to process an increasing amount of data by converting it into higher-order value functions.
This expansion increases the agent's state space, enabling it to scale up to more complex problems, but also increases the risk of forgetting caused by learning on redundant or conflicting data.
To improve the approximation over this large amount of data, a random mini-batch of past experiences stored in the replay memory buffer is often replayed at each learning step (a minimal sketch of this replay mechanism follows the list below).
arXiv Detail & Related papers (2019-12-29T20:07:18Z)
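As referenced in the Augmented Replay Memory entry above, the sketch below shows the generic replay-memory loop: transitions are stored in a fixed-capacity buffer and a random mini-batch is sampled at each learning step. It illustrates the standard mechanism only, not that paper's augmentation; the class name, capacity, and batch size are placeholder choices.

```python
# Generic replay-buffer sketch; not the augmented memory from the cited paper.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest experiences are evicted first

    def push(self, state, action, reward, next_state, done):
        """Store one transition observed by the agent."""
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        """Return a random mini-batch of stored transitions for a learning step."""
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

# Usage: store transitions as the agent acts, then replay a mini-batch each step.
buf = ReplayBuffer()
buf.push(state=0, action=1, reward=0.5, next_state=1, done=False)
batch = buf.sample(batch_size=1)
```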
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.