Adaptive Machine Learning for Time-Varying Systems: Low Dimensional
Latent Space Tuning
- URL: http://arxiv.org/abs/2107.06207v1
- Date: Tue, 13 Jul 2021 16:05:28 GMT
- Title: Adaptive Machine Learning for Time-Varying Systems: Low Dimensional
Latent Space Tuning
- Authors: Alexander Scheinker
- Abstract summary: We present a recently developed method of adaptive machine learning for time-varying systems.
Our approach is to map very high (N>100k) dimensional inputs into the low dimensional (N~2) latent space at the output of the encoder section of an encoder-decoder CNN.
This method allows us to learn correlations within a time-varying system and to track their evolution in real time based on feedback, without interrupting operations to collect new training data.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) tools such as encoder-decoder convolutional neural
networks (CNN) can represent incredibly complex nonlinear functions which map
between combinations of images and scalars. For example, CNNs can be used to
map combinations of accelerator parameters and images which are 2D projections
of the 6D phase space distributions of charged particle beams as they are
transported between various particle accelerator locations. Despite their
strengths, applying ML to time-varying systems, or systems with shifting
distributions, is an open problem, especially for large systems for which
collecting new data for re-training is impractical or interrupts operations.
Particle accelerators are one example of large time-varying systems for which
collecting detailed training data requires lengthy dedicated beam measurements
which may no longer be available during regular operations. We present a
recently developed method of adaptive ML for time-varying systems. Our approach
is to map very high (N>100k) dimensional inputs (a combination of scalar
parameters and images) into the low dimensional (N~2) latent space at the
output of the encoder section of an encoder-decoder CNN. We then actively tune
the low-dimensional latent-space representation of the complex system dynamics
by adding an adaptively tuned feedback vector directly before the decoder
section, which builds it back up into our image-based high-dimensional phase
space density representation. This method allows us to learn correlations
within, and quickly tune the characteristics of, systems with very large
numbers of parameters, and to track their evolution in real time based on
feedback, without massive new data sets for re-training.
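The abstract describes a small architectural pattern: an encoder compresses the
high-dimensional inputs to a ~2-dimensional latent vector, an adaptively tuned
feedback vector is added to that latent vector, and the decoder rebuilds the
high-dimensional phase space image. Below is a minimal sketch of that pattern,
assuming PyTorch, 64x64 single-channel images only (the paper's inputs also
include scalar parameters), toy layer sizes, and a generic extremum-seeking-style
update law for the feedback vector; the paper's exact architecture and tuning law
may differ.

```python
# Minimal sketch of the adaptive latent-space tuning idea described above.
# Assumptions (not from the paper): PyTorch, 64x64 single-channel images,
# toy layer sizes, and an illustrative extremum-seeking-style update law.
import torch
import torch.nn as nn


class EncoderDecoder(nn.Module):
    """Encoder-decoder CNN with a very low-dimensional (here 2-D) latent space."""

    def __init__(self, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),  # -> 32x32
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1),              # -> 64x64
        )

    def forward(self, x, delta=None):
        z = self.encoder(x)
        if delta is not None:
            z = z + delta  # adaptively tuned feedback vector added in latent space
        return self.decoder(z)


def es_step(delta, cost, t, dt=0.1, gain=1.0, amp=0.05, freqs=(7.0, 11.0)):
    """One discrete extremum-seeking-style step that nudges the latent feedback
    vector to reduce a scalar cost measured from the live, time-varying system.
    This particular update law is an illustrative assumption."""
    w = torch.tensor(freqs)
    d_delta = torch.sqrt(amp * w) * torch.cos(w * t + gain * cost)
    return delta + dt * d_delta


# Hypothetical usage: the trained network's weights stay frozen; only the 2-D
# feedback vector is tuned online from whatever diagnostic signal is available.
model = EncoderDecoder().eval()
delta = torch.zeros(2)
for k in range(200):
    x_meas = torch.randn(1, 1, 64, 64)                   # stand-in for live inputs
    with torch.no_grad():
        pred = model(x_meas, delta)
    cost = float(((pred - x_meas) ** 2).mean())          # stand-in feedback cost
    delta = es_step(delta, cost, t=k * 0.1)
```

Because only the low-dimensional feedback vector is updated, the tuning can run
continuously during operations without re-training the CNN itself.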
Related papers
- Disentangling Spatial and Temporal Learning for Efficient Image-to-Video
Transfer Learning [59.26623999209235]
We present DiST, which disentangles the learning of spatial and temporal aspects of videos.
The disentangled learning in DiST is highly efficient because it avoids the back-propagation of massive pre-trained parameters.
Extensive experiments on five benchmarks show that DiST delivers better performance than existing state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2023-09-14T17:58:33Z)
- Data-Free Learning of Reduced-Order Kinematics [54.85157881323157]
We produce a low-dimensional map whose image parameterizes a diverse yet low-energy submanifold of configurations.
We represent subspaces as neural networks that map a low-dimensional latent vector to the full configuration space.
This formulation is effective across a very general range of physical systems.
arXiv Detail & Related papers (2023-05-05T20:53:36Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data [4.996878640124385]
We propose a general framework called Neural Implicit Flow (NIF) that enables a mesh-agnostic, low-rank representation of large-scale, parametric, spatio-temporal data.
NIF consists of two modified multilayer perceptrons: (i) ShapeNet, which isolates and represents the spatial complexity, and (ii) ParameterNet, which accounts for any other input measurements, including parametric dependencies, time, and sensor measurements (a loose sketch of this split appears after this list).
We demonstrate the utility of NIF for parametric surrogate modeling, enabling the interpretable representation and compression of complex spatio-temporal dynamics, efficient many-spatial-temporal generalization, and improved performance for sparse reconstruction.
arXiv Detail & Related papers (2022-04-07T05:02:58Z)
- Faster hyperspectral image classification based on selective kernel mechanism using deep convolutional networks [18.644268589334217]
This letter designs the Faster Selective Kernel mechanism Network (FSKNet) to balance the trade-off between accuracy and computational cost.
It designs 3D-CNN and 2D-CNN conversion modules, using the 3D-CNN to perform feature extraction while reducing the spatial and spectral dimensionality.
FSKNet achieves high accuracy on the IN, UP, Salinas, and Botswana data sets with very few parameters.
arXiv Detail & Related papers (2022-02-14T02:14:50Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Non-linear State-space Model Identification from Video Data using Deep Encoders [0.0]
We propose a novel non-linear state-space identification method starting from high-dimensional input and output data.
An encoder function, represented by a neural network, is introduced to learn a reconstructability map to estimate the model states from past inputs and outputs.
We apply the proposed method to a video stream of a simulated environment of a controllable ball in a unit box.
arXiv Detail & Related papers (2020-12-14T17:14:46Z)
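Of the entries above, the Neural Implicit Flow summary describes its structure
concretely enough to illustrate: one network handles the spatial coordinate,
the other handles parameters, time, and sensor measurements. The sketch below
is a loose illustration of that two-network split, assuming PyTorch, toy layer
sizes, and a simple concatenation-based coupling between the two networks; the
actual NIF coupling and hyperparameters may differ.

```python
# Loose sketch of the ShapeNet / ParameterNet split described in the Neural
# Implicit Flow entry above. Assumptions (not from that paper): PyTorch, toy
# layer sizes, and concatenation-based conditioning of ShapeNet on the code.
import torch
import torch.nn as nn


class ParameterNet(nn.Module):
    """Encodes non-spatial inputs (parameters, time, sensor readings) into a code."""

    def __init__(self, in_dim=3, code_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))

    def forward(self, p):
        return self.net(p)


class ShapeNet(nn.Module):
    """Evaluates the field at a spatial coordinate, conditioned on the code."""

    def __init__(self, coord_dim=2, code_dim=8, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(coord_dim + code_dim, 64), nn.ReLU(), nn.Linear(64, out_dim))

    def forward(self, xy, code):
        return self.net(torch.cat([xy, code], dim=-1))


# Hypothetical usage: query the field at arbitrary, mesh-free coordinates for
# one parameter/time/sensor configuration.
pnet, snet = ParameterNet(), ShapeNet()
code = pnet(torch.tensor([[0.5, 0.1, 0.0]]))   # (1, 8) code for this configuration
coords = torch.rand(100, 2)                    # 100 query points in the domain
values = snet(coords, code.expand(100, -1))    # (100, 1) field values
```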