Phase Transition Adaptation
- URL: http://arxiv.org/abs/2104.10132v1
- Date: Tue, 20 Apr 2021 17:18:34 GMT
- Title: Phase Transition Adaptation
- Authors: Claudio Gallicchio, Alessio Micheli, Luca Silvestri
- Abstract summary: We propose an extension of the original approach, a local unsupervised learning mechanism we call Phase Transition Adaptation.
We show experimentally that our approach consistently achieves its purpose over several datasets.
- Score: 14.034816857287044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial Recurrent Neural Networks are a powerful information processing
abstraction, and Reservoir Computing provides an efficient strategy to build
robust implementations by projecting external inputs into high dimensional
dynamical system trajectories. In this paper, we propose an extension of the
original approach, a local unsupervised learning mechanism we call Phase
Transition Adaptation, designed to drive the system dynamics towards the `edge
of stability'. Here, the complex behavior exhibited by the system elicits an
enhancement in its overall computational capacity. We show experimentally that
our approach consistently achieves its purpose over several datasets.
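The "edge of stability" regime can be illustrated with a minimal echo state network sketch: a common (if crude) proxy is to rescale the recurrent weight matrix so its spectral radius sits just below one. This is not the paper's Phase Transition Adaptation mechanism itself; the function names and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_units=100, target_rho=0.99):
    """Build a random reservoir and rescale its recurrent weights so the
    spectral radius lands just below 1 -- a standard proxy for operating
    near the edge of stability."""
    W = rng.standard_normal((n_units, n_units)) / np.sqrt(n_units)
    rho = np.max(np.abs(np.linalg.eigvals(W)))
    return W * (target_rho / rho)

def run_reservoir(W, inputs, w_in_scale=0.1):
    """Drive the reservoir with a 1-D input sequence, collecting the
    high-dimensional state trajectory."""
    n = W.shape[0]
    w_in = rng.standard_normal(n) * w_in_scale
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + w_in * u)
        states.append(x.copy())
    return np.array(states)

W = make_reservoir()
states = run_reservoir(W, np.sin(np.linspace(0, 10, 200)))
```

The key design point is that only the global scaling of `W` is tuned here, whereas the paper's mechanism adapts the dynamics locally and in an unsupervised fashion.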
Related papers
- Enhancing Tabular Data Optimization with a Flexible Graph-based Reinforced Exploration Strategy [16.782884097690882]
Current frameworks for automated feature transformation rely on iterative sequence generation tasks.
Three cascading agents iteratively select nodes and apply mathematical operations to generate new transformation states.
This strategy leverages the inherent properties of the graph structure, allowing for the preservation and reuse of valuable transformations.
arXiv Detail & Related papers (2024-06-11T16:10:37Z) - Hallmarks of Optimization Trajectories in Neural Networks: Directional Exploration and Redundancy [75.15685966213832]
We analyze the rich directional structure of optimization trajectories represented by their pointwise parameters.
We show that training only the scalar batchnorm parameters partway into training matches the performance of training the entire network.
arXiv Detail & Related papers (2024-03-12T07:32:47Z) - Amortized Network Intervention to Steer the Excitatory Point Processes [8.15558505134853]
Excitatory point processes (i.e., event flows) occurring over dynamic graphs provide a fine-grained model to capture how discrete events may spread over time and space.
How to effectively steer the event flows by modifying the dynamic graph structures presents an interesting problem, motivated by curbing the spread of infectious diseases.
We design an Amortized Network Interventions framework, allowing for the pooling of optimal policies from history and other contexts.
arXiv Detail & Related papers (2023-10-06T11:17:28Z) - Adaptive Growth: Real-time CNN Layer Expansion [0.0]
This research presents a new algorithm that allows the convolutional layer of a Convolutional Neural Network (CNN) to dynamically evolve based on data input.
Instead of a rigid architecture, our approach iteratively introduces kernels to the convolutional layer, gauging its real-time response to varying data.
Remarkably, our unsupervised method has outstripped its supervised counterparts across diverse datasets.
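The growth idea above can be sketched in a toy form: a layer that adds a new kernel whenever its existing kernels respond weakly to the current input. This is only an illustrative 1-D analogue under assumed names and thresholds, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

class GrowingConvLayer:
    """Toy 1-D convolutional layer that adds a kernel when existing
    kernels respond weakly to an input -- an illustrative sketch of
    data-driven layer growth."""

    def __init__(self, kernel_size=3, threshold=0.5):
        self.kernel_size = kernel_size
        self.threshold = threshold
        self.kernels = [rng.standard_normal(kernel_size)]

    def responses(self, x):
        # Max absolute correlation of each kernel with the signal.
        return [np.max(np.abs(np.correlate(x, k, mode="valid")))
                for k in self.kernels]

    def maybe_grow(self, x):
        # If no kernel responds strongly enough, seed a new one from
        # the start of the current input window.
        if max(self.responses(x)) < self.threshold:
            self.kernels.append(x[:self.kernel_size].copy())

layer = GrowingConvLayer()
for _ in range(20):
    layer.maybe_grow(rng.standard_normal(32))
```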
arXiv Detail & Related papers (2023-09-06T14:43:58Z) - Distributionally Robust Model-based Reinforcement Learning with Large
State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment time.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image
Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets that let the transform capture valid information within a content-conditioned range.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
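Offset-driven aggregation can be sketched in 1-D: samples are gathered at predicted offset positions around a base location and combined with learned weights. Real methods predict offsets per content and interpolate bilinearly; the nearest-neighbor rounding and all names below are simplifying assumptions.

```python
import numpy as np

def offset_aggregate(x, base_idx, offsets, weights):
    """Gather input samples at offset positions around a base index and
    combine them -- a 1-D, nearest-neighbor sketch of offset-driven
    adaptive spatial aggregation."""
    idx = np.clip(base_idx + np.round(offsets).astype(int), 0, len(x) - 1)
    return float(np.dot(x[idx], weights))

x = np.arange(10, dtype=float)
# Offsets reach beyond the fixed neighborhood a rigid kernel would see.
y = offset_aggregate(x, 4, np.array([-1.2, 0.0, 1.7]),
                     np.array([0.25, 0.5, 0.25]))
```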
arXiv Detail & Related papers (2023-08-17T01:34:51Z) - Deep Augmentation: Self-Supervised Learning with Transformations in
Activation Space [18.655316096015937]
We introduce Deep Augmentation, an approach to implicit data augmentation using dropout or PCA to transform a targeted layer within a neural network to improve performance and generalization.
We demonstrate Deep Augmentation through extensive experiments on contrastive learning tasks in NLP, computer vision, and graph learning.
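The dropout variant of augmentation in activation space can be sketched as masking a hidden representation to produce a second "view" for contrastive learning. A minimal sketch under assumed names and rates; the actual method targets specific layers inside a network.

```python
import numpy as np

rng = np.random.default_rng(2)

def dropout_augment(h, rate=0.3):
    """Implicit augmentation in activation space: randomly zero a
    fraction of a hidden representation and rescale the survivors,
    yielding an alternative view of the same example."""
    mask = rng.random(h.shape) >= rate
    return h * mask / (1.0 - rate)

h = rng.standard_normal((4, 16))   # batch of hidden activations
view_a = dropout_augment(h)
view_b = dropout_augment(h)        # a different random mask each call
```

Because each call draws a fresh mask, the two views differ while remaining unbiased estimates of `h`, which is what a contrastive objective can exploit.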
arXiv Detail & Related papers (2023-03-25T19:03:57Z) - Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z) - Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate
Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
arXiv Detail & Related papers (2022-06-07T13:51:35Z) - Resource-Efficient Invariant Networks: Exponential Gains by Unrolled
Optimization [8.37077056358265]
We propose a new computational primitive for building invariant networks based instead on optimization.
We provide empirical and theoretical corroboration of the efficiency gains and soundness of our proposed method.
We demonstrate its utility in constructing an efficient invariant network for a simple hierarchical object detection task.
arXiv Detail & Related papers (2022-03-09T19:04:08Z) - Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.