Machine-learning-assisted construction of appropriate rotating frame
- URL: http://arxiv.org/abs/2211.15269v4
- Date: Tue, 11 Apr 2023 11:23:35 GMT
- Title: Machine-learning-assisted construction of appropriate rotating frame
- Authors: Yoshihiro Michishita
- Abstract summary: We propose methods that use machine learning to discover analytical methods.
We demonstrate that recurrent neural networks can ``derive'' the Floquet-Magnus expansion.
We also argue that this approach is applicable to finding other theoretical frameworks in other systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning with neural networks is becoming an increasingly
powerful tool for various tasks, such as natural language processing, image
recognition, game playing, and even problems in physics. Although there are
many studies on the application of machine learning to numerical calculation
and the assistance of experimental detection, methods of applying machine
learning to discover analytical methods are poorly studied. In this letter, we
propose methods that use machine learning to find such analytical methods. We
demonstrate that recurrent neural networks can ``derive'' the Floquet-Magnus
expansion just by being given the time-periodic Hamiltonian as input, and can
derive the appropriate rotating frame in the periodically-driven system. We
also argue that this approach is applicable to finding other theoretical
frameworks in other systems.
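For intuition, the leading term of the Floquet-Magnus expansion that the abstract refers to is simply the one-period average of the time-periodic Hamiltonian, (1/T) ∫_0^T H(t) dt. The following is a minimal numerical sketch on an illustrative driven two-level model; the specific Hamiltonian, parameter values, and function names are our own assumptions and are not taken from the paper.

```python
import numpy as np

# Pauli matrices for an illustrative driven two-level system
# (an assumed toy model, not the Hamiltonian studied in the paper).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(t, delta=1.0, A=0.5, omega=10.0):
    """Time-periodic Hamiltonian H(t) = (delta/2) sz + A cos(omega t) sx."""
    return 0.5 * delta * sz + A * np.cos(omega * t) * sx

def magnus_leading_term(H, T, n=1000):
    """Leading Floquet-Magnus term: the one-period average (1/T) * integral of
    H(t) over [0, T], approximated by an equal-spacing Riemann sum
    (highly accurate for trigonometric drives)."""
    ts = np.linspace(0.0, T, n, endpoint=False)
    return sum(H(t) for t in ts) / n

T = 2 * np.pi / 10.0          # one driving period for omega = 10
H_eff = magnus_leading_term(H, T)
# The cos(omega t) drive averages to zero over one period,
# leaving only the static part (delta/2) sz.
```

Higher-order terms involve nested commutators of H(t) at different times; it is this hierarchy that the paper argues a recurrent network can learn to reproduce from the input Hamiltonian alone.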
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Detecting Moving Objects With Machine Learning [0.0]
This chapter presents a review of the use of machine learning techniques to find moving objects in astronomical imagery.
I discuss various pitfalls with the use of machine learning techniques, including a discussion on the important issue of overfitting.
arXiv Detail & Related papers (2024-05-10T00:13:39Z) - Application-Driven Innovation in Machine Learning [56.85396167616353]
We describe the paradigm of application-driven research in machine learning.
We show how this approach can productively synergize with methods-driven work.
Despite these benefits, we find that reviewing, hiring, and teaching practices in machine learning often hold back application-driven innovation.
arXiv Detail & Related papers (2024-03-26T04:59:27Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Deep Learning Meets Sparse Regularization: A Signal Processing
Perspective [17.12783792226575]
We present a mathematical framework that characterizes the functional properties of neural networks that are trained to fit to data.
Key mathematical tools which support this framework include transform-domain sparse regularization, the Radon transform of computed tomography, and approximation theory.
This framework explains the effect of weight decay regularization in neural network training, the use of skip connections and low-rank weight matrices in network architectures, the role of sparsity in neural networks, and explains why neural networks can perform well in high-dimensional problems.
arXiv Detail & Related papers (2023-01-23T17:16:21Z) - The Physics of Machine Learning: An Intuitive Introduction for the
Physical Scientist [0.0]
This article is intended for physical scientists who wish to gain deeper insights into machine learning algorithms.
We begin with a review of two energy-based machine learning algorithms, Hopfield networks and Boltzmann machines, and their connection to the Ising model.
We then delve into additional, more "practical," machine learning architectures including feedforward neural networks, convolutional neural networks, and autoencoders.
arXiv Detail & Related papers (2021-11-27T15:12:42Z) - Measuring and modeling the motor system with machine learning [117.44028458220427]
The utility of machine learning in understanding the motor system is promising a revolution in how to collect, measure, and analyze data.
We discuss the growing use of machine learning: from pose estimation, kinematic analyses, dimensionality reduction, and closed-loop feedback, to its use in understanding neural correlates and untangling sensorimotor systems.
arXiv Detail & Related papers (2021-03-22T12:42:16Z) - Spiking Neural Networks Hardware Implementations and Challenges: a
Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences.