Exploring the limits of multifunctionality across different reservoir
computers
- URL: http://arxiv.org/abs/2205.11375v1
- Date: Mon, 23 May 2022 15:06:38 GMT
- Title: Exploring the limits of multifunctionality across different reservoir
computers
- Authors: Andrew Flynn, Oliver Heilmann, Daniel Köglmayr, Vassilios A.
Tsachouridis, Christoph Räth, and Andreas Amann
- Abstract summary: We explore the performance of a continuous-time, leaky-integrator, and next-generation 'reservoir computer' (RC).
We train each RC to reconstruct a coexistence of chaotic attractors from different dynamical systems.
We examine the critical effects that certain parameters can have in each RC to achieve multifunctionality.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multifunctional neural networks are capable of performing more than one task
without changing any network connections. In this paper we explore the
performance of a continuous-time, leaky-integrator, and next-generation
'reservoir computer' (RC), when trained on tasks which test the limits of
multifunctionality. In the first task we train each RC to reconstruct a
coexistence of chaotic attractors from different dynamical systems. By moving
the data describing these attractors closer together, we find that the extent
to which each RC can reconstruct both attractors diminishes as they begin to
overlap in state space. In order to provide a greater understanding of this
inhibiting effect, in the second task we train each RC to reconstruct a
coexistence of two circular orbits which differ only in the direction of
rotation. We examine the critical effects that certain parameters can have in
each RC to achieve multifunctionality in this extreme case of completely
overlapping training data.
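The abstract's first task can be illustrated with a minimal sketch of a leaky-integrator reservoir computer trained on two coexisting orbits. This is a hedged, self-contained example, not the paper's implementation: the reservoir size, leak rate, spectral radius, and ridge-regression regularisation below are illustrative values, and the two circular orbits differing only in rotation direction mirror the paper's second task rather than the chaotic attractors of the first.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumed, not taken from the paper)
N = 300          # reservoir size
rho = 0.9        # spectral radius of the internal weights
sigma = 0.1      # input scaling
alpha = 0.3      # leak rate of the leaky-integrator neurons
beta = 1e-6      # ridge-regression regularisation

# Random internal and input weight matrices
W = rng.uniform(-1, 1, (N, N))
W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
W_in = sigma * rng.uniform(-1, 1, (N, 2))

def drive(u_seq, r0=None):
    """Run the leaky-integrator reservoir over an input sequence."""
    r = np.zeros(N) if r0 is None else r0.copy()
    states = []
    for u in u_seq:
        r = (1 - alpha) * r + alpha * np.tanh(W @ r + W_in @ u)
        states.append(r.copy())
    return np.array(states)

# Two circular orbits that differ only in the direction of rotation
t = np.linspace(0, 40 * np.pi, 4000)
cw = np.stack([np.cos(t), -np.sin(t)], axis=1)   # clockwise
ccw = np.stack([np.cos(t), np.sin(t)], axis=1)   # counter-clockwise

# One shared readout is trained on both attractors: this is the
# multifunctional setting, where a single set of trained weights must
# reconstruct a coexistence of attractors.
R = np.vstack([drive(cw[:-1]), drive(ccw[:-1])])
Y = np.vstack([cw[1:], ccw[1:]])
W_out = np.linalg.solve(R.T @ R + beta * np.eye(N), R.T @ Y)

def predict(u0, r0, steps):
    """Closed-loop prediction: the RC is fed its own output."""
    r, u, out = r0.copy(), u0.copy(), []
    for _ in range(steps):
        r = (1 - alpha) * r + alpha * np.tanh(W @ r + W_in @ u)
        u = r @ W_out
        out.append(u)
    return np.array(out)
```

Because the training data of the two orbits overlap completely in state space, which attractor the closed loop settles onto depends on the initial reservoir state, which is the crux of the 'seeing double' problem discussed in the related papers below.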
Related papers
- Reinforcement Learning with Action Sequence for Data-Efficient Robot Learning [62.3886343725955]
We introduce a novel RL algorithm that learns a critic network that outputs Q-values over a sequence of actions.
By explicitly training the value functions to learn the consequence of executing a series of current and future actions, our algorithm allows for learning useful value functions from noisy trajectories.
arXiv Detail & Related papers (2024-11-19T01:23:52Z) - Renormalized Connection for Scale-preferred Object Detection in Satellite Imagery [51.83786195178233]
We design a Knowledge Discovery Network (KDN) to implement the renormalization group theory in terms of efficient feature extraction.
Renormalized connection (RC) on the KDN enables "synergistic focusing" of multi-scale features.
RCs extend the multi-level feature's "divide-and-conquer" mechanism of the FPN-based detectors to a wide range of scale-preferred tasks.
arXiv Detail & Related papers (2024-09-09T13:56:22Z) - Exploring the origins of switching dynamics in a multifunctional reservoir computer [0.0]
Reservoir computers (RCs) reconstruct multiple attractors simultaneously using the same set of trained weights.
In certain cases, if the RC fails to reconstruct a coexistence of attractors then it exhibits a form of metastability.
This paper explores the origins of these switching dynamics in a paradigmatic setting via the 'seeing double' problem.
arXiv Detail & Related papers (2024-08-27T20:51:48Z) - Multifunctionality in a Connectome-Based Reservoir Computer [0.0]
The 'fruit fly RC' (FFRC) exhibits multifunctionality using the 'seeing double' problem as a benchmark test.
Compared to the widely-used Erdős-Rényi Reservoir Computer (ERRC), we report that the FFRC exhibits a greater capacity for multifunctionality.
arXiv Detail & Related papers (2023-06-02T19:37:38Z) - Seeing double with a multifunctional reservoir computer [0.0]
Multifunctional biological neural networks exploit multistability in order to perform multiple tasks without changing any network properties.
We study how a reservoir computer reconstructs a coexistence of attractors when there is an overlap between them.
A bifurcation analysis reveals how multifunctionality emerges and is destroyed as the RC enters a chaotic regime.
arXiv Detail & Related papers (2023-05-09T23:10:29Z) - Continual Object Detection via Prototypical Task Correlation Guided
Gating Mechanism [120.1998866178014]
We present a flexible framework for continual object detection via pRotOtypical taSk corrElaTion guided gaTing mechAnism (ROSETTA).
Concretely, a unified framework is shared by all tasks while task-aware gates are introduced to automatically select sub-models for specific tasks.
Experiments on COCO-VOC, KITTI-Kitchen, class-incremental detection on VOC and sequential learning of four tasks show that ROSETTA yields state-of-the-art performance.
arXiv Detail & Related papers (2022-05-06T07:31:28Z) - Centralizing State-Values in Dueling Networks for Multi-Robot
Reinforcement Learning Mapless Navigation [87.85646257351212]
We study the problem of multi-robot mapless navigation in the popular Centralized Training and Decentralized Execution (CTDE) paradigm.
This problem is challenging when each robot considers its path without explicitly sharing observations with other robots.
We propose a novel architecture for CTDE that uses a centralized state-value network to compute a joint state-value.
arXiv Detail & Related papers (2021-12-16T16:47:00Z) - Multi-task Over-the-Air Federated Learning: A Non-Orthogonal
Transmission Approach [52.85647632037537]
We propose a multi-task over-the-air federated learning (MOAFL) framework, where multiple learning tasks share edge devices for data collection and learning models under the coordination of an edge server (ES).
Both the convergence analysis and numerical results demonstrate that the MOAFL framework can significantly reduce the uplink bandwidth consumption of multiple tasks without causing substantial learning performance degradation.
arXiv Detail & Related papers (2021-06-27T13:09:32Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Reparameterizing Convolutions for Incremental Multi-Task Learning
without Task Interference [75.95287293847697]
Two common challenges in developing multi-task models are often overlooked in the literature.
First, enabling the model to be inherently incremental, continuously incorporating information from new tasks without forgetting the previously learned ones (incremental learning).
Second, eliminating adverse interactions amongst tasks, which have been shown to significantly degrade the single-task performance in a multi-task setup (task interference).
arXiv Detail & Related papers (2020-07-24T14:44:46Z) - A Framework for Automatic Behavior Generation in Multi-Function Swarms [1.290382979353427]
A framework for automatic behavior generation in multi-function swarms is proposed.
The framework is tested on a scenario with three simultaneous tasks.
The effect of noise on the behavior characteristics in MAP-elites is investigated.
arXiv Detail & Related papers (2020-07-11T20:50:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.