Quantum adaptive agents with efficient long-term memories
- URL: http://arxiv.org/abs/2108.10876v1
- Date: Tue, 24 Aug 2021 17:57:05 GMT
- Title: Quantum adaptive agents with efficient long-term memories
- Authors: Thomas J. Elliott, Mile Gu, Andrew J. P. Garner, Jayne Thompson
- Abstract summary: The more information the agent must recall from its past experiences, the more memory it will need.
We uncover the most general form a quantum agent need adopt to maximise memory compression advantages.
We show these encodings can exhibit extremely favourable scaling advantages relative to memory-minimal classical agents.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Central to the success of adaptive systems is their ability to interpret
signals from their environment and respond accordingly -- they act as agents
interacting with their surroundings. Such agents typically perform better when
able to execute increasingly complex strategies. This comes with a cost: the
more information the agent must recall from its past experiences, the more
memory it will need. Here we investigate the power of agents capable of quantum
information processing. We uncover the most general form a quantum agent need
adopt to maximise memory compression advantages, and provide a systematic means
of encoding their memory states. We show these encodings can exhibit extremely
favourable scaling advantages relative to memory-minimal classical agents when
information must be retained about events increasingly far into the past.
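The memory advantage the abstract describes can be illustrated with a toy calculation (our own sketch, not the paper's construction): a classical agent storing two equiprobable memory states pays the full Shannon cost, while a quantum agent may encode the same states as non-orthogonal pure states and pay only the von Neumann entropy of the resulting mixture. The overlap value below is an arbitrary choice for illustration.

```python
import numpy as np

# Classical memory cost: Shannon entropy of the memory-state distribution.
p = np.array([0.5, 0.5])
classical_bits = float(-np.sum(p * np.log2(p)))  # = 1.0 bit

# Hypothetical quantum encoding: map the two memory states to
# non-orthogonal qubit states with overlap <sigma0|sigma1> = c.
c = 0.6
sigma0 = np.array([1.0, 0.0])
sigma1 = np.array([c, np.sqrt(1.0 - c**2)])

# Quantum memory cost: von Neumann entropy of the encoded ensemble.
rho = p[0] * np.outer(sigma0, sigma0) + p[1] * np.outer(sigma1, sigma1)
eigvals = np.linalg.eigvalsh(rho)
quantum_bits = float(-sum(v * np.log2(v) for v in eigvals if v > 1e-12))

print(f"classical memory: {classical_bits:.3f} bits")
print(f"quantum memory:   {quantum_bits:.3f} bits")  # ~0.722 bits
```

Because the encoded states need not be orthogonal, the mixture is less mixed than the classical distribution, so the quantum cost is strictly below one bit whenever the overlap is nonzero.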
Related papers
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, using a minimal number of late pre-trained layers alleviates the peak memory demand.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)
- Spatially-Aware Transformer for Embodied Agents [20.498778205143477]
This paper explores the use of Spatially-Aware Transformer models that incorporate spatial information.
We demonstrate that memory utilization efficiency can be improved, leading to enhanced accuracy in various place-centric downstream tasks.
We also propose the Adaptive Memory Allocator, a memory management method based on reinforcement learning.
arXiv Detail & Related papers (2024-02-23T07:46:30Z)
- Large Content And Behavior Models To Understand, Simulate, And Optimize Content And Behavior [66.4024040742149]
We introduce the receivers' "behavior tokens," such as shares, likes, clicks, purchases, and retweets, in the LLM's training corpora to optimize content for the receivers and predict their behaviors.
Other than showing similar performance to LLMs on content understanding tasks, our trained models show generalization capabilities on the behavior dimension.
We call these models Large Content and Behavior Models (LCBMs).
arXiv Detail & Related papers (2023-09-01T09:34:49Z)
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excel at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
- A Machine with Short-Term, Episodic, and Semantic Memory Systems [4.6862970461449605]
Inspired by the cognitive science theory of the explicit human memory systems, we have modeled an agent with short-term, episodic, and semantic memory systems.
Our experiments indicate that an agent with human-like memory systems can outperform an agent without this memory structure in the environment.
arXiv Detail & Related papers (2022-12-05T08:34:23Z)
- Efficient algorithms for quantum information bottleneck [64.67104066707309]
We propose a new and general algorithm for the quantum generalisation of the information bottleneck.
Our algorithm converges faster, and with stronger guarantees of convergence, than prior approaches.
Notably, we discover that a quantum system can achieve strictly better performance than a classical system of the same size regarding quantum information bottleneck.
arXiv Detail & Related papers (2022-08-22T14:20:05Z)
- A Cognitive Architecture for Machine Consciousness and Artificial Superintelligence: Thought Is Structured by the Iterative Updating of Working Memory [0.0]
This article provides an analytical framework for how to simulate human-like thought processes within a computer.
It describes how attention and memory should be structured, updated, and utilized to search for associative additions to the stream of thought.
arXiv Detail & Related papers (2022-03-29T22:28:30Z)
- Experimental quantum speed-up in reinforcement learning agents [0.17849902073068336]
Reinforcement learning (RL) is an important paradigm within artificial intelligence (AI).
We present a RL experiment where the learning of an agent is boosted by utilizing a quantum communication channel with the environment.
We implement this learning protocol on a compact and fully tunable integrated nanophotonic processor.
arXiv Detail & Related papers (2021-03-10T19:01:12Z)
- What can I do here? A Theory of Affordances in Reinforcement Learning [65.70524105802156]
We develop a theory of affordances for agents who learn and plan in Markov Decision Processes.
Affordances play a dual role in this setting, notably by reducing the number of actions available in any given situation.
We propose an approach to learn affordances and use it to estimate transition models that are simpler and generalize better.
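The core idea of restricting per-state action sets can be sketched in a few lines (our toy example, not the paper's formalism; state names and the affordance map below are hypothetical):

```python
# An affordance map restricts which actions an agent considers in each
# state, shrinking the branching factor when planning in a small MDP.
ACTIONS = ("up", "down", "left", "right")

# Hypothetical affordances: the actions "afforded" by each state.
AFFORDANCES = {
    "corridor": ("up", "down"),   # sideways moves would hit walls
    "open_room": ACTIONS,         # everything is available
}

def afforded_actions(state):
    """Return the afforded action subset, defaulting to all actions."""
    return AFFORDANCES.get(state, ACTIONS)

# Planning over afforded actions considers 2 branches in a corridor
# instead of 4, which is what makes the learned transition models simpler.
print(afforded_actions("corridor"))  # ('up', 'down')
```

With fewer candidate actions per state, a transition model only needs to cover the afforded subset, which is one intuition for why such models can be simpler and generalize better.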
arXiv Detail & Related papers (2020-06-26T16:34:53Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
- A Proposal for Intelligent Agents with Episodic Memory [0.9236074230806579]
We argue that an agent would benefit from an episodic memory.
This memory encodes the agent's experience in such a way that the agent can relive the experience.
We propose an architecture combining ANNs and standard computer-science techniques to support the storage and retrieval of episodic memories.
arXiv Detail & Related papers (2020-05-07T00:26:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.