Machine Learning with Chaotic Strange Attractors
- URL: http://arxiv.org/abs/2309.13361v1
- Date: Sat, 23 Sep 2023 12:54:38 GMT
- Title: Machine Learning with Chaotic Strange Attractors
- Authors: Bahadır Utku Kesgin and Uğur Teğin
- Abstract summary: We present an analog computing method that harnesses chaotic nonlinear attractors to perform machine learning tasks with low power consumption.
Inspired by neuromorphic computing, our model is a programmable, versatile, and generalized platform for machine learning tasks.
When deployed as a simple analog device, it only requires milliwatt-scale power levels while being on par with current machine learning techniques.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning studies need colossal power to process massive datasets and
train neural networks to reach high accuracies, a trend that has become
increasingly unsustainable. Limited by the von Neumann bottleneck, current computing
architectures and methods fuel this high power consumption. Here, we present an
analog computing method that harnesses chaotic nonlinear attractors to perform
machine learning tasks with low power consumption. Inspired by neuromorphic
computing, our model is a programmable, versatile, and generalized platform for
machine learning tasks. Our model provides exceptional performance in clustering
by utilizing chaotic attractors' nonlinear mapping and sensitivity to initial
conditions. When deployed as a simple analog device, it only requires
milliwatt-scale power levels while being on par with current machine learning
techniques. We demonstrate low errors and high accuracies with our model for
regression and classification-based learning tasks.
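The abstract does not detail the attractor circuit or training scheme. As a hedged illustration of the general idea, the sketch below uses a numerically simulated Lorenz attractor as a fixed nonlinear feature map with a trainable linear readout, reservoir-computing style; the integration step, feature sampling, and toy dataset are all assumptions for illustration, not the paper's setup.

```python
# Minimal sketch: a simulated Lorenz attractor as a fixed nonlinear feature
# map with a trainable linear readout. The paper's actual analog hardware
# and training scheme are not described in the abstract.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz_step(state, dt=0.01):
    """One Euler step of the Lorenz system."""
    x, y, z = state
    dx = SIGMA * (y - x)
    dy = x * (RHO - z) - y
    dz = x * y - BETA * z
    return state + dt * np.array([dx, dy, dz])

def attractor_features(sample, n_steps=200, keep_every=10):
    """Encode a 2-D input as the attractor's initial condition and use the
    short trajectory that unfolds from it as a feature vector."""
    state = np.array([sample[0], sample[1], 25.0])
    feats = []
    for i in range(n_steps):
        state = lorenz_step(state)
        if i % keep_every == 0:
            feats.extend(state)
    return np.array(feats)

X, y = make_moons(n_samples=400, noise=0.1, random_state=0)
F = np.array([attractor_features(x) for x in X])
F_tr, F_te, y_tr, y_te = train_test_split(F, y, random_state=0)
readout = LogisticRegression(max_iter=2000).fit(F_tr, y_tr)
print("test accuracy:", readout.score(F_te, y_te))
```

The chaotic flow supplies the nonlinear mapping and sensitivity to initial conditions; only the lightweight linear readout is trained, which is what makes the analog-hardware version of this scheme cheap to run.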
Related papers
- Obtaining physical layer data of latest generation networks for investigating adversary attacks [0.0]
Machine learning can be used to optimize the functions of the latest generation of data networks such as 5G and 6G.
Adversarial measures that manipulate the behaviour of intelligent machine learning models are becoming a major concern.
A simulation model is proposed that works in conjunction with machine learning applications.
arXiv Detail & Related papers (2024-05-02T06:03:27Z)
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
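The abstract only says NeuRLP is inspired by reducing linear ODE solving to linear programming. One classical version of that reduction is sketched below: trapezoidal collocation of dx/dt = -x posed as LP equality constraints. The test equation and discretization are illustrative assumptions, not the paper's formulation.

```python
# Hedged sketch of reducing a linear ODE to a linear program: discretize
# dx/dt = -x with the trapezoidal rule and impose each step as an LP
# equality row. A zero objective suffices since the system pins down x.
import numpy as np
from scipy.optimize import linprog

N, T = 50, 5.0
dt = T / N
n_var = N + 1                      # x_0 ... x_N on the time grid

A_eq = np.zeros((N + 1, n_var))
b_eq = np.zeros(N + 1)
A_eq[0, 0] = 1.0                   # initial condition x(0) = 1
b_eq[0] = 1.0
for k in range(N):                 # trapezoidal step as an equality row:
    A_eq[k + 1, k] = -1.0 + dt / 2.0   # x_{k+1} - x_k + (dt/2)(x_k + x_{k+1}) = 0
    A_eq[k + 1, k + 1] = 1.0 + dt / 2.0

res = linprog(c=np.zeros(n_var), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n_var, method="highs")
t = np.linspace(0.0, T, n_var)
print("max error vs exp(-t):", np.abs(res.x - np.exp(-t)).max())
```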
- Machine Learning Without a Processor: Emergent Learning in a Nonlinear Electronic Metamaterial [0.6597195879147557]
We introduce a nonlinear learning metamaterial -- an analog electronic network made of self-adjusting nonlinear resistive elements based on transistors.
We demonstrate that the system learns tasks unachievable in linear systems, including XOR and nonlinear regression, without a computer.
This suggests enormous potential for fast, low-power computing in edge systems like sensors, robotic controllers, and medical devices.
arXiv Detail & Related papers (2023-11-01T14:16:37Z)
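As a toy illustration of why XOR is unachievable in linear systems (the actual device is an analog transistor network, not simulated here), the sketch below shows a linear least-squares fit failing on XOR while a single added nonlinear feature solves it exactly.

```python
# Why XOR needs nonlinearity: a best linear fit is stuck at 0.5 everywhere,
# while one nonlinear feature (the input product) makes it linearly solvable.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Best linear fit (with bias): cannot separate XOR.
Xb = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print("linear predictions:", np.round(Xb @ w, 2))      # all ~0.5

# One nonlinear feature, x1*x2, suffices: XOR = x1 + x2 - 2*x1*x2.
Xn = np.hstack([Xb, X[:, :1] * X[:, 1:]])
wn, *_ = np.linalg.lstsq(Xn, y, rcond=None)
print("nonlinear predictions:", np.round(Xn @ wn, 2))  # ~[0, 1, 1, 0]
```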
- Learning-based adaption of robotic friction models [48.453527255659296]
We introduce a novel approach to adapt an existing friction model to new dynamics using as little data as possible.
Our proposed estimator outperforms the conventional model-based approach and the base neural network significantly.
Our method does not rely on data with external load during training, eliminating the need for external torque sensors.
arXiv Detail & Related papers (2023-10-25T14:50:15Z)
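The abstract gives no architecture details; the sketch below illustrates the general residual idea under stated assumptions: keep an analytic base friction model and fit a small learned correction (here random Fourier features with ridge regression, an assumption) from limited data.

```python
# Hedged sketch of residual friction adaptation: a fixed physics model plus
# a small learned correction fit from sparse data. Not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)

def base_friction(v, fc=0.3, fv=0.1):
    """Simple Coulomb + viscous base model (illustrative parameters)."""
    return fc * np.sign(v) + fv * v

def true_friction(v):
    """Stand-in plant: base model plus an unmodeled Stribeck-like term."""
    return base_friction(v) + 0.2 * np.exp(-(v / 0.5) ** 2) * np.sign(v)

v_train = rng.uniform(-2, 2, 40)                  # deliberately little data
residual = true_friction(v_train) - base_friction(v_train)

# Random Fourier features + ridge regression as the learned correction.
W = rng.normal(0, 2.0, (1, 50))
b = rng.uniform(0, 2 * np.pi, 50)
phi = lambda v: np.cos(np.atleast_2d(v).T @ W + b)
Phi = phi(v_train)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(50), Phi.T @ residual)

v_test = np.linspace(-2, 2, 200)
adapted = base_friction(v_test) + phi(v_test) @ w
print("max error of adapted model:", np.abs(adapted - true_friction(v_test)).max())
```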
- Controlling dynamical systems to complex target states using machine learning: next-generation vs. classical reservoir computing [68.8204255655161]
Controlling nonlinear dynamical systems with machine learning makes it possible to drive systems not only into simple behavior such as periodicity but also into more complex, arbitrary dynamics.
We first show that classical reservoir computing excels at this task.
We then compare these results, across different amounts of training data, to an alternative setup in which next-generation reservoir computing is used instead.
While the two deliver comparable performance for usual amounts of training data, next-generation RC significantly outperforms classical RC when only very limited data are available.
arXiv Detail & Related papers (2023-07-14T07:05:17Z)
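For context, next-generation reservoir computing replaces the random recurrent reservoir with time-delay coordinates plus polynomial features and a linear ridge readout. A minimal one-step-prediction sketch on a chaotic logistic map is below; the paper's control task is more involved and not reproduced here.

```python
# Hedged NG-RC sketch: constant + delay coordinates + quadratic monomials,
# fit with ridge regression. Only the building block, not the control loop.
import numpy as np

# Data: a chaotic logistic map trajectory.
x = np.empty(600)
x[0] = 0.3
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

def ngrc_features(x, t):
    d = np.array([1.0, x[t], x[t - 1]])             # constant + delay taps
    quad = np.outer(d[1:], d[1:])[np.triu_indices(2)]
    return np.concatenate([d, quad])                # + quadratic monomials

T = 500
Phi = np.array([ngrc_features(x, t) for t in range(1, T)])
target = x[2:T + 1]                                 # next-step values
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ target)

# One-step prediction on held-out data.
Phi_te = np.array([ngrc_features(x, t) for t in range(T, 598)])
print("max one-step error:", np.abs(Phi_te @ w - x[T + 1:599]).max())
```

Because the logistic map is itself a quadratic polynomial, these features can represent it exactly, which is the kind of inductive-bias match that lets NG-RC get away with very little training data.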
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Continual Learning with Transformers for Image Classification [12.028617058465333]
In computer vision, neural network models struggle to continually learn new concepts without forgetting what has been learnt in the past.
We develop a solution called Adaptive Distillation of Adapters (ADA) to perform continual learning.
We empirically demonstrate on different classification tasks that this method maintains good predictive performance without retraining the model.
arXiv Detail & Related papers (2022-06-28T15:30:10Z)
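The abstract names Adaptive Distillation of Adapters without spelling out its mechanics. The sketch below is a generic, assumed combination of the two ingredients the name suggests: a bottleneck adapter on a frozen backbone plus a distillation term against the previous model. It is not the paper's exact method, and `old_model` is a stub.

```python
# Generic adapter + distillation sketch (assumed, not ADA's exact recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    """Small bottleneck added to a frozen layer; only this part is trained."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
    def forward(self, h):
        return h + self.up(F.relu(self.down(h)))    # residual connection

backbone = nn.Linear(32, 32)                         # stand-in frozen block
for p in backbone.parameters():
    p.requires_grad = False
adapter, head = Adapter(32), nn.Linear(32, 10)
old_model = lambda x: torch.zeros(x.size(0), 10)     # previous-task model (stub)

x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
logits = head(adapter(backbone(x)))
loss = F.cross_entropy(logits, y) \
     + 0.5 * F.mse_loss(logits, old_model(x))        # distillation term
loss.backward()
print("adapter receives gradients:", adapter.down.weight.grad is not None)
```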
- Uncertainty Estimation in Machine Learning [0.0]
In machine learning, model complexity and severe nonlinearity become serious obstacles to uncertainty evaluation.
The latest example of a pre-trained model is the Generative Pre-trained Transformer 3 with hundreds of billions of parameters and a half-terabyte training dataset.
arXiv Detail & Related papers (2022-06-03T16:11:11Z)
- Nonlinear Autoregression with Convergent Dynamics on Novel Computational Platforms [0.0]
Reservoir computing exploits nonlinear dynamical systems for temporal information processing.
This paper introduces reservoir computers with output feedback as stationary and ergodic infinite-order nonlinear autoregressive models.
arXiv Detail & Related papers (2021-08-18T07:01:16Z)
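A minimal sketch of the architecture discussed: an echo state network trained with teacher forcing whose readout is then fed back as input, turning the reservoir into an autonomous nonlinear autoregressive generator. Hyperparameters are illustrative; the paper's stationarity and ergodicity analysis is theoretical and not reproduced.

```python
# Reservoir computer with output feedback: teacher-forced training, then
# closed-loop generation where the prediction replaces the external input.
import numpy as np

rng = np.random.default_rng(1)
N = 200
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # spectral radius < 1
W_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(np.linspace(0, 40 * np.pi, 2000))         # target signal
states = np.zeros((len(u), N))
r = np.zeros(N)
for t in range(len(u) - 1):
    r = np.tanh(W @ r + W_in * u[t])                 # teacher-forced input
    states[t + 1] = r

# Ridge readout mapping the reservoir state to the next signal value.
S, y = states[100:], u[100:]                         # discard warm-up
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)

# Closed loop: output feedback replaces the external input.
r, last, preds = states[-1], u[-1], []
for _ in range(200):
    r = np.tanh(W @ r + W_in * last)
    last = w_out @ r
    preds.append(last)
print("feedback run, first 5 outputs:", np.round(preds[:5], 3))
```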
- One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High-speed, low-energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is demonstrated in simulations of predicting the price of a house in Boston and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z)
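The array's one-step learning physically solves a linear system in a single analog operation; its digital counterpart is the closed-form normal-equation solve sketched below. Synthetic data stands in for the Boston housing example, since that dataset is no longer distributed with common ML libraries.

```python
# Digital counterpart of the crosspoint array's one-step regression:
# no gradient iterations, a single linear solve of the normal equations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 13))                       # 13 features, Boston-like
true_w = rng.normal(size=13)
y = X @ true_w + 0.01 * rng.normal(size=100)

w = np.linalg.solve(X.T @ X, X.T @ y)                # "one computational step"
print("weight recovery error:", np.abs(w - true_w).max())
```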
This list is automatically generated from the titles and abstracts of the papers in this site.