Split Learning Meets Koopman Theory for Wireless Remote Monitoring and
Prediction
- URL: http://arxiv.org/abs/2104.08109v1
- Date: Fri, 16 Apr 2021 13:34:01 GMT
- Title: Split Learning Meets Koopman Theory for Wireless Remote Monitoring and
Prediction
- Authors: Abanoub M. Girgis, Hyowoon Seo, Jihong Park, Mehdi Bennis, and Jinho
Choi
- Abstract summary: We propose to train an autoencoder whose encoder and decoder are split and stored at a state sensor and its remote observer, respectively.
This autoencoder not only decreases the remote monitoring payload size by reducing the state representation dimension, but also learns the system dynamics by lifting it via a Koopman operator.
Numerical results under a non-linear cart-pole environment demonstrate that the proposed split learning of a Koopman autoencoder can locally predict future states, and the prediction accuracy increases with the representation dimension and transmission power.
- Score: 76.88643211266168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Remote state monitoring over wireless is envisaged to play a pivotal role in
enabling beyond 5G applications ranging from remote drone control to remote
surgery. One key challenge is to identify the system dynamics, which are
non-linear and involve a high-dimensional state. To obviate this issue, in this
article we propose to train an autoencoder whose encoder and decoder are split
and stored at a state sensor and its remote observer, respectively. This
autoencoder not only decreases the remote monitoring payload size by reducing
the state representation dimension, but also learns the system dynamics by
lifting it via a Koopman operator, thereby allowing the observer to locally
predict future states after training convergence. Numerical results under a
non-linear cart-pole environment demonstrate that the proposed split learning
of a Koopman autoencoder can locally predict future states, and the prediction
accuracy increases with the representation dimension and transmission power.
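To make the scheme above concrete, the following is a minimal sketch of a split Koopman autoencoder, assuming a PyTorch implementation. The class names, layer sizes, and loss terms are illustrative assumptions rather than the exact architecture from the paper; the encoder is meant to live on the state sensor, while the decoder and the learned linear Koopman operator live at the remote observer, where a received latent code can be rolled forward as z_{t+n} ≈ K^n z_t to predict future states locally.

```python
# Minimal sketch of a split Koopman autoencoder (PyTorch assumed).
# Class names, layer sizes, and loss weighting are illustrative assumptions,
# not the exact design reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SensorEncoder(nn.Module):
    """Stored at the state sensor: compresses the raw state x_t into a
    latent code z_t, which is what gets transmitted over the wireless link."""

    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ObserverDecoder(nn.Module):
    """Stored at the remote observer: reconstructs the state from a received
    (or locally predicted) latent code, and holds the learned Koopman operator."""

    def __init__(self, latent_dim: int, state_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )
        # Linear Koopman operator K on the lifted representation:
        # the latent dynamics are modelled as z_{t+1} ≈ K z_t.
        self.koopman = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

    def predict(self, z: torch.Tensor, horizon: int) -> torch.Tensor:
        """After training convergence, the observer predicts future states
        locally by applying K repeatedly, with no further transmissions."""
        for _ in range(horizon):
            z = self.koopman(z)
        return self.net(z)


def training_step(encoder, decoder, x_t, x_next, optimizer):
    """One illustrative training step on a pair of consecutive states.

    In the actual split-learning setting the encoder output (uplink) and the
    gradient at the cut layer (downlink) would be exchanged over the wireless
    channel; here everything runs in one process for simplicity."""
    z_t, z_next = encoder(x_t), encoder(x_next)
    loss = (
        F.mse_loss(decoder(z_t), x_t)                         # reconstruction
        + F.mse_loss(decoder.koopman(z_t), z_next)            # linear latent dynamics
        + F.mse_loss(decoder(decoder.koopman(z_t)), x_next)   # one-step state prediction
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Example wiring (dimensions are placeholders; the cart-pole state is 4-dimensional):
# enc = SensorEncoder(state_dim=4, latent_dim=8)
# dec = ObserverDecoder(latent_dim=8, state_dim=4)
# opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
```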
Related papers
- Unsupervised Stereo Matching Network For VHR Remote Sensing Images Based On Error Prediction [5.68487023151187]
We propose a novel unsupervised stereo matching network for VHR remote sensing images.
A light-weight module to bridge confidence with predicted error is introduced to refine the core model.
The experimental results on US3D and WHU-Stereo datasets demonstrate that the proposed network achieves superior accuracy compared to other unsupervised networks.
arXiv Detail & Related papers (2024-08-14T09:59:04Z)
- Multistep Inverse Is Not All You Need [87.62730694973696]
In real-world control settings, the observation space is often unnecessarily high-dimensional and subject to time-correlated noise.
It is therefore desirable to learn an encoder to map the observation space to a simpler space of control-relevant variables.
We propose a new algorithm, ACDF, which combines multistep-inverse prediction with a latent forward model (see the illustrative sketch after this list).
arXiv Detail & Related papers (2024-03-18T16:36:01Z)
- USat: A Unified Self-Supervised Encoder for Multi-Sensor Satellite Imagery [5.671254904219855]
We develop a new encoder architecture called USat that can input multi-spectral data from multiple sensors for self-supervised pre-training.
We integrate USat into a Masked Autoencoder (MAE) self-supervised pre-training procedure and find that a pre-trained USat outperforms state-of-the-art MAE models trained on remote sensing data.
arXiv Detail & Related papers (2023-12-02T19:17:04Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate this problem with two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramification if we drive accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z)
- Safe Output Feedback Motion Planning from Images via Learned Perception Modules and Contraction Theory [6.950510860295866]
We present a motion planning algorithm for a class of uncertain control-affine nonlinear systems which guarantees runtime safety and goal reachability.
We train a perception system that seeks to invert a subset of the state from an observation, and estimate an upper bound on the perception error.
Next, we use contraction theory to design a stabilizing state feedback controller and a convergent dynamic state observer.
We derive a bound on the trajectory tracking error when this controller is subjected to errors in the dynamics and incorrect state estimates.
arXiv Detail & Related papers (2022-06-14T02:03:27Z)
- Integral Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection [78.2325219839805]
imTED improves the state-of-the-art of few-shot object detection by up to 7.6% AP.
Experiments on the MS COCO dataset demonstrate that imTED consistently outperforms its counterparts by 2.8%.
arXiv Detail & Related papers (2022-05-19T15:11:20Z)
- Neural Network Based Lidar Gesture Recognition for Realtime Robot Teleoperation [0.0]
We propose a novel low-complexity lidar gesture recognition system for mobile robot control.
The system is lightweight and suitable for mobile robot control with limited computing power.
The use of lidar contributes to the robustness of the system, allowing it to operate in most outdoor conditions.
arXiv Detail & Related papers (2021-09-17T00:49:31Z)
- Variational Autoencoders: A Harmonic Perspective [79.49579654743341]
We study Variational Autoencoders (VAEs) from the perspective of harmonic analysis.
We show that the encoder variance of a VAE controls the frequency content of the functions parameterised by the VAE encoder and decoder neural networks.
arXiv Detail & Related papers (2021-05-31T10:39:25Z)
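For the "Multistep Inverse Is Not All You Need" entry above, the following is a minimal, hedged sketch of how a multistep-inverse objective can be combined with a latent forward model, as that summary describes. All module names, network sizes, and the loss weighting are assumptions made for illustration and are not taken from the ACDF paper.

```python
# Hedged sketch: multistep-inverse prediction combined with a latent forward
# model (the combination described for ACDF above). Names and sizes are
# illustrative assumptions, not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObsEncoder(nn.Module):
    """Maps a high-dimensional, noisy observation to a compact latent state."""

    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, o):
        return self.net(o)


class MultistepInverse(nn.Module):
    """Predicts the first action a_t from latent states k steps apart."""

    def __init__(self, latent_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_actions))

    def forward(self, z_t, z_tk):
        return self.net(torch.cat([z_t, z_tk], dim=-1))


class LatentForward(nn.Module):
    """Predicts the next latent state from the current latent state and action."""

    def __init__(self, latent_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + num_actions, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))

    def forward(self, z_t, a_onehot):
        return self.net(torch.cat([z_t, a_onehot], dim=-1))


def combined_loss(enc, inv, fwd, o_t, o_t1, o_tk, a_t, num_actions):
    """Multistep-inverse term plus latent forward-model term
    (equally weighted here purely for illustration)."""
    z_t, z_t1, z_tk = enc(o_t), enc(o_t1), enc(o_tk)
    # Recover the action a_t from latent states k steps apart.
    inv_loss = F.cross_entropy(inv(z_t, z_tk), a_t)
    # One-step prediction in latent space, conditioned on the taken action.
    a_onehot = F.one_hot(a_t, num_actions).float()
    fwd_loss = F.mse_loss(fwd(z_t, a_onehot), z_t1.detach())
    return inv_loss + fwd_loss
```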
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.