A cyclical route linking fundamental mechanism and AI algorithm: An example from tuning Poisson's ratio in amorphous networks
- URL: http://arxiv.org/abs/2312.03404v3
- Date: Tue, 9 Jul 2024 23:45:34 GMT
- Title: A cyclical route linking fundamental mechanism and AI algorithm: An example from tuning Poisson's ratio in amorphous networks
- Authors: Changliang Zhu, Chenchao Fang, Zhipeng Jin, Baowen Li, Xiangying Shen, Lei Xu
- Abstract summary: "AI for science" is a future trend in the development of scientific research.
This article uses the investigation into the relationship between extreme Poisson's ratio values and the structure of amorphous networks as a case study.
We employ a convolutional neural network, trained on the dynamical matrix rather than on images as in traditional recognition tasks, to predict the Poisson's ratio of amorphous networks with much higher efficiency.
- Score: 2.2450275029638282
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: "AI for science" is widely recognized as a future trend in the development of scientific research. Currently, although machine learning algorithms have played a crucial role in scientific research with numerous successful cases, relatively few instances exist where AI assists researchers in uncovering the underlying physical mechanisms behind a certain phenomenon and subsequently using that mechanism to improve machine learning algorithms' efficiency. This article uses the investigation into the relationship between extreme Poisson's ratio values and the structure of amorphous networks as a case study to illustrate how machine learning methods can assist in revealing underlying physical mechanisms. Upon recognizing that the Poisson's ratio relies on the low-frequency vibrational modes of the dynamical matrix, we can then employ a convolutional neural network, trained on the dynamical matrix rather than on images as in traditional recognition tasks, to predict the Poisson's ratio of amorphous networks with much higher efficiency. Through this example, we aim to showcase the role that artificial intelligence can play in revealing fundamental physical mechanisms, which in turn significantly improves the machine learning algorithms.
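The two steps the abstract couples, extracting low-frequency vibrational modes from the dynamical matrix and training a CNN directly on that matrix rather than on images, can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the network size, the spring-connection rule, and the CNN architecture are all assumptions made for the example.

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
N = 32                                     # number of nodes (assumption)
pos = rng.uniform(0.0, 1.0, size=(N, 2))   # random node positions in 2D

# Connect nearby node pairs with unit-stiffness springs (cutoff is arbitrary).
bonds = [(i, j) for i in range(N) for j in range(i + 1, N)
         if np.linalg.norm(pos[i] - pos[j]) < 0.35]

# Dynamical (Hessian) matrix of the harmonic network: each bond (i, j) with
# unit vector n contributes +n n^T to the (i,i), (j,j) blocks and -n n^T to
# the (i,j), (j,i) blocks.
D = np.zeros((2 * N, 2 * N))
for i, j in bonds:
    n = pos[j] - pos[i]
    n /= np.linalg.norm(n)
    blk = np.outer(n, n)
    for a, b, sign in ((i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)):
        D[2 * a:2 * a + 2, 2 * b:2 * b + 2] += sign * blk

# Low-frequency vibrational modes: the smallest eigenvalues of D (omega^2);
# the first few are trivial zero modes (rigid translations).
print("lowest omega^2:", np.linalg.eigvalsh(D)[:6])

# A small CNN that reads the dynamical matrix as a one-channel "image" and
# regresses a single scalar, standing in for the Poisson's ratio predictor.
class PoissonCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16, 1))

    def forward(self, x):
        return self.head(self.features(x))

x = torch.tensor(D, dtype=torch.float32)[None, None]  # shape (1, 1, 2N, 2N)
print("untrained prediction:", PoissonCNN()(x).item())
```

Training would regress this output against Poisson's ratios measured on many generated networks; the forward pass above only fixes the input/output interface.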
Related papers
- Demolition and Reinforcement of Memories in Spin-Glass-like Neural Networks [0.0]
The aim of this thesis is to understand the effectiveness of Unlearning in both associative memory models and generative models.
The selection of structured data enables an associative memory model to retrieve concepts as attractors of a neural dynamics with considerable basins of attraction.
A novel regularization technique for Boltzmann Machines is presented, proving to outperform previously developed methods in learning hidden probability distributions from data-sets.
arXiv Detail & Related papers (2024-03-04T23:12:42Z)
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
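As a hedged illustration of the layer-normalization-plus-weight-decay recipe reported in the entry above: a small PyTorch training step using LayerNorm blocks and AdamW's decoupled weight decay. The architecture, sizes, and hyperparameters are assumptions for the sketch, not the paper's setup.

```python
import torch
import torch.nn as nn

# A block with LayerNorm after each linear layer; layer sizes are illustrative.
def block(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, d_out), nn.LayerNorm(d_out), nn.ReLU())

model = nn.Sequential(block(128, 256), block(256, 256), nn.Linear(256, 10))

# AdamW applies decoupled weight decay, the second half of the recipe.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

x, y = torch.randn(32, 128), torch.randint(0, 10, (32,))
loss = nn.functional.cross_entropy(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```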
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Towards a population-informed approach to the definition of data-driven models for structural dynamics [0.0]
A population-based scheme is followed here and two different machine-learning algorithms from the meta-learning domain are used.
The algorithms seem to perform as intended and outperform a traditional machine-learning algorithm at approximating the quantities of interest.
arXiv Detail & Related papers (2023-07-19T09:45:41Z)
- Mechanism of feature learning in deep fully connected networks and kernel machines that recursively learn features [15.29093374895364]
We identify and characterize the mechanism through which deep fully connected neural networks learn gradient features.
Our ansatz sheds light on various deep learning phenomena including emergence of spurious features and simplicity biases.
To demonstrate the effectiveness of this feature learning mechanism, we use it to enable feature learning in classical, non-feature learning models.
arXiv Detail & Related papers (2022-12-28T15:50:58Z)
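One way to make the "gradient features" idea in the entry above concrete, assuming the mechanism in question is the average gradient outer product (AGOP) of the network output with respect to its inputs, is the following sketch; the network and data are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

# Placeholder network and inputs.
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
X = torch.randn(256, 10, requires_grad=True)

# Per-sample input gradients; summing the scalar outputs gives each row's gradient.
grads = torch.autograd.grad(net(X).sum(), X)[0]

# Average gradient outer product: its top eigenvectors indicate the input
# directions (features) the model relies on most.
agop = grads.T @ grads / len(X)
print("top AGOP eigenvalues:", torch.linalg.eigh(agop).eigenvalues[-3:])
```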
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Using machine-learning modelling to understand macroscopic dynamics in a system of coupled maps [0.0]
We consider, as a case study, the macroscopic motion emerging from a system of globally coupled maps.
We build a coarse-grained Markov process for the macroscopic dynamics both with a machine learning approach and with a direct numerical computation of the transition probability of the coarse-grained process.
We are able to infer important information about the effective dimension of the attractor, the persistence of memory effects and the multi-scale structure of the dynamics.
arXiv Detail & Related papers (2020-11-08T15:38:12Z)
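The "direct numerical computation" route in the entry above can be sketched concretely: simulate globally coupled maps, coarse-grain the macroscopic mean field into bins, and estimate the Markov transition matrix by counting. The map family (coupled logistic maps), parameters, and bin count are assumptions for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, eps, a, n_bins = 1000, 20000, 0.1, 1.7, 20
x = rng.uniform(-1, 1, N)

# Globally coupled logistic maps: x_i <- (1 - eps) f(x_i) + eps * mean(f(x)).
mean_field = np.empty(T)
for t in range(T):
    mean_field[t] = x.mean()
    fx = 1 - a * x**2
    x = (1 - eps) * fx + eps * fx.mean()

# Coarse-grain the mean field into bins and count transitions between bins.
edges = np.linspace(mean_field.min(), mean_field.max() + 1e-9, n_bins + 1)
states = np.digitize(mean_field, edges) - 1
P = np.zeros((n_bins, n_bins))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1
P /= np.maximum(P.sum(axis=1, keepdims=True), 1)  # row-normalized transition matrix
print("transition matrix:", P.shape, "first row sums:", P.sum(axis=1)[:3])
```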
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why the deep neural networks have poor performance under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
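For concreteness, the event-driven neuron dynamics such hardware implements can be illustrated with a leaky integrate-and-fire (LIF) update; all constants below are arbitrary choices for the sketch.

```python
import numpy as np

tau, v_th, v_reset, dt = 20.0, 1.0, 0.0, 1.0
v, spikes = 0.0, []
current = 2.0 * np.abs(np.sin(0.1 * np.arange(100)))  # toy input current

for t, i_in in enumerate(current):
    v += dt / tau * (-v + i_in)  # leaky integration of the membrane potential
    if v >= v_th:                # threshold crossing emits a spike ...
        spikes.append(t)
        v = v_reset              # ... and resets the membrane potential
print("spike times:", spikes)
```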
- Input-to-State Representation in linear reservoirs dynamics [15.491286626948881]
Reservoir computing is a popular approach to design recurrent neural networks.
The working principle of these networks is not fully understood.
A novel analysis of the dynamics of such networks is proposed.
arXiv Detail & Related papers (2020-03-24T00:14:25Z)
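As a sketch of the input-to-state map in a linear reservoir, assuming the standard echo-state setup, the state is a fading-memory linear filter of the input history, x_t = sum_k W^k W_in u_{t-1-k}; the dimensions and spectral-radius scaling below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, T = 50, 200
W = rng.standard_normal((n_state, n_state))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory
W_in = rng.standard_normal((n_state, 1))

u = rng.standard_normal(T)
x = np.zeros(n_state)
for t in range(T):
    x = W @ x + W_in[:, 0] * u[t]  # linear reservoir update, no nonlinearity
print("state norm after driving:", np.linalg.norm(x))
```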