Deep learning for the design of non-Hermitian topolectrical circuits
- URL: http://arxiv.org/abs/2402.09978v1
- Date: Thu, 15 Feb 2024 14:41:55 GMT
- Title: Deep learning for the design of non-Hermitian topolectrical circuits
- Authors: Xi Chen, Jinyang Sun, Xiumei Wang, Hengxuan Jiang, Dandan Zhu, and
Xingping Zhou
- Abstract summary: We introduce several deep learning algorithms, based on the multi-layer perceptron (MLP) and the convolutional neural network (CNN), to predict the winding of the eigenvalues of non-Hermitian Hamiltonians.
Our results demonstrate the effectiveness of the deep learning network in capturing the global topological characteristics of a non-Hermitian system based on training data.
- Score: 8.960003862907877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Non-Hermitian topological phases can produce some remarkable
properties compared with their Hermitian counterparts, such as the breakdown
of conventional bulk-boundary correspondence and the non-Hermitian topological
edge mode. Here, we introduce several deep learning algorithms, based on the
multi-layer perceptron (MLP) and the convolutional neural network (CNN), to
predict the winding of the eigenvalues of non-Hermitian Hamiltonians.
Subsequently, we use the smallest module of the periodic circuit as one unit
to construct high-dimensional circuit data features. Further, we use the
Dense Convolutional Network (DenseNet), a convolutional architecture that
exploits dense connections between layers, to design a non-Hermitian
topolectrical Chern circuit, since DenseNet is well suited to processing
high-dimensional data. Our results demonstrate the effectiveness of the deep
learning network in capturing the global topological characteristics of a
non-Hermitian system based on training data.
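The first step of this pipeline, predicting the spectral winding from model
parameters, can be made concrete with a small sketch. The abstract does not fix
the model or the network details, so the single-band Hatano-Nelson Hamiltonian,
the parameter ranges, and the tiny tanh MLP below are illustrative assumptions
rather than the authors' actual setup.

```python
import numpy as np

def hatano_nelson_energy(k, t_right, t_left):
    """Single-band Hatano-Nelson dispersion E(k) = t_R e^{ik} + t_L e^{-ik}."""
    return t_right * np.exp(1j * k) + t_left * np.exp(-1j * k)

def spectral_winding(energies, base_energy=0.0):
    """Winding of the complex eigenvalue loop around a base energy, i.e. the
    accumulated phase of E(k) - E_B over one Brillouin zone, divided by 2*pi."""
    dphi = np.diff(np.angle(np.append(energies, energies[0]) - base_energy))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap each step into (-pi, pi]
    return int(np.round(dphi.sum() / (2 * np.pi)))

# Labelled dataset: hopping amplitudes -> sign of the winding number.
rng = np.random.default_rng(0)
k = np.linspace(0, 2 * np.pi, 256, endpoint=False)
X, y = [], []
for _ in range(2000):
    t_r, t_l = rng.uniform(0.1, 2.0, size=2)
    X.append([t_r, t_l])
    y.append(spectral_winding(hatano_nelson_energy(k, t_r, t_l)))
X = np.array(X)
y = (np.array(y) > 0).astype(float)[:, None]      # binary label: w = +1 vs w = -1

# Tiny one-hidden-layer MLP trained by plain gradient descent on cross-entropy.
W1, b1 = rng.normal(0, 1, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 1, (16, 1)), np.zeros(1)
for step in range(2000):
    h = np.tanh(X @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # predicted probability
    g = (p - y) / len(X)                          # d(loss)/d(logit)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)
    gW1, gb1 = X.T @ gh, gh.sum(0)
    for P, G in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        P -= 0.5 * G
print("training accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```

For this toy model the decision boundary is simply t_R > t_L, so the network
only has to rediscover a known rule; the point of the sketch is the shape of
the data pipeline, not the difficulty of the task.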
Related papers
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
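As a hedged illustration of the sparse-coding/CNN connection this entry builds
on: each iteration of the classic ISTA solver below is a linear filtering step
followed by a soft-threshold nonlinearity, which is exactly the shape of a CNN
layer. The dictionary size, penalty weight, and step size are arbitrary demo
choices, not parameters taken from the paper.

```python
import numpy as np

def ista(x, D, lam=0.1, lr=0.1, steps=200):
    """Iterative shrinkage-thresholding for min_z 0.5*||x - D z||^2 + lam*||z||_1."""
    z = np.zeros(D.shape[1])
    for _ in range(steps):
        z = z - lr * D.T @ (D @ z - x)                        # gradient step on the fit term
        z = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0)  # soft-threshold: sparsity
    return z

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)               # overcomplete, column-normalized dictionary
z_true = np.zeros(128)
z_true[rng.choice(128, 5, replace=False)] = rng.normal(size=5)
x = D @ z_true                               # signal with a 5-sparse code

z = ista(x, D)
print("nonzero coefficients recovered:", np.count_nonzero(np.abs(z) > 1e-3))
```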
- Deep Neural Networks with Symplectic Preservation Properties [10.700252603950107]
We propose a deep neural network architecture designed such that its output forms an invertible symplectomorphism of the input.
This design draws an analogy to the real-valued non-volume-preserving (real NVP) method used in normalizing flow techniques.
arXiv Detail & Related papers (2024-06-29T03:25:54Z)
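The one-line summary leaves the architecture open; a standard building block
for invertible symplectomorphisms, used here as an assumed stand-in, is the
shear map (q, p) -> (q, p - grad V(q)), the symplectic cousin of real NVP's
additive coupling layer. The finite-difference check below confirms that its
Jacobian J satisfies J^T Omega J = Omega.

```python
import numpy as np

def grad_V(q, W, b):
    """Gradient of the potential V(q) = sum_j softplus(w_j . q + b_j)."""
    s = 1 / (1 + np.exp(-(W @ q + b)))        # sigmoid is the derivative of softplus
    return W.T @ s

def shear_forward(q, p, W, b):
    return q, p - grad_V(q, W, b)             # exact symplectomorphism (a "shear")

def shear_inverse(q, p, W, b):
    return q, p + grad_V(q, W, b)             # invertible in closed form

rng = np.random.default_rng(2)
d = 3
W, b = rng.normal(size=(8, d)), rng.normal(size=8)
q, p = rng.normal(size=d), rng.normal(size=d)

# Finite-difference Jacobian of the map x = (q, p) -> shear_forward(q, p).
eps = 1e-6
x0 = np.concatenate([q, p])
f0 = np.concatenate(shear_forward(q, p, W, b))
J = np.zeros((2 * d, 2 * d))
for i in range(2 * d):
    x = x0.copy()
    x[i] += eps
    J[:, i] = (np.concatenate(shear_forward(x[:d], x[d:], W, b)) - f0) / eps

Omega = np.block([[np.zeros((d, d)), np.eye(d)],
                  [-np.eye(d), np.zeros((d, d))]])
print("symplecticity error:", np.abs(J.T @ Omega @ J - Omega).max())
```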
- Deep Learning as Ricci Flow [38.27936710747996]
Deep neural networks (DNNs) are powerful tools for approximating the distribution of complex data.
We show that the transformations performed by DNNs during classification tasks have parallels to those expected under Hamilton's Ricci flow.
Our findings motivate the use of tools from differential and discrete geometry for the problem of explainability in deep learning.
arXiv Detail & Related papers (2024-04-22T15:12:47Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- Quantum-inspired event reconstruction with Tensor Networks: Matrix Product States [0.0]
We show that Tensor Networks are ideal vehicles to connect quantum mechanical concepts to machine learning techniques.
We show that entanglement entropy can be used to interpret what a network learns.
arXiv Detail & Related papers (2021-06-15T18:00:02Z)
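To make the entanglement-entropy diagnostic concrete: the bipartite entropy of
any pure state follows from the singular values of the state vector reshaped
across the cut. The random state below is only a placeholder for a trained
matrix product state, whose entropy at any cut is capped by the logarithm of
its bond dimension.

```python
import numpy as np

def entanglement_entropy(psi, dim_left, dim_right):
    """Von Neumann entropy across a bipartition, from the Schmidt (singular)
    values of the state vector reshaped into a dim_left x dim_right matrix."""
    s = np.linalg.svd(psi.reshape(dim_left, dim_right), compute_uv=False)
    prob = s ** 2 / np.sum(s ** 2)            # squared Schmidt coefficients
    prob = prob[prob > 1e-12]
    return float(-np.sum(prob * np.log(prob)))

rng = np.random.default_rng(3)
n, d = 8, 2                                   # 8 sites with local dimension 2
psi = rng.normal(size=d ** n) + 1j * rng.normal(size=d ** n)
psi /= np.linalg.norm(psi)

# Entropy profile over every cut; an MPS with bond dimension chi obeys
# S <= log(chi) at each cut, which is what makes S useful for interpretation.
for cut in range(1, n):
    S = entanglement_entropy(psi, d ** cut, d ** (n - cut))
    print(f"cut {cut}: S = {S:.3f} (maximum possible {min(cut, n - cut) * np.log(d):.3f})")
```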
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
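A minimal sketch of the differentiable-connectivity idea, with assumed
details: every edge of a complete DAG carries a logit that is squashed through
a sigmoid into a gate scaling that connection, so the connectivity pattern can
be trained by gradient descent alongside the ordinary weights.

```python
import numpy as np

def gated_dag_forward(x, edge_logits, transforms):
    """Forward pass over a complete DAG: node j aggregates the outputs of all
    earlier nodes, each scaled by the learnable gate sigmoid(edge_logits[i, j])."""
    outs = [x]                                # node 0 is the input itself
    for j, T in enumerate(transforms):
        gates = 1 / (1 + np.exp(-edge_logits[: j + 1, j]))
        agg = sum(g * o for g, o in zip(gates, outs))
        outs.append(np.tanh(T @ agg))         # node operation on the gated sum
    return outs[-1]

rng = np.random.default_rng(4)
dim, n_nodes = 4, 3
edge_logits = rng.normal(size=(n_nodes, n_nodes))   # one logit per DAG edge
transforms = [rng.normal(size=(dim, dim)) for _ in range(n_nodes)]

y = gated_dag_forward(rng.normal(size=dim), edge_logits, transforms)
print(y)
```

Because the gates enter the forward pass smoothly, the edge logits receive
ordinary gradients, which is the differentiable relaxation the summary refers to.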
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
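A sketch of a parameter flow on the orthogonal group O(d), assuming (this is
not taken from the paper) that a Cayley step stands in for the paper's flow:
with a skew-symmetric generator, the Cayley transform is exactly orthogonal,
so the evolving weight matrix stays on O(d) and the recurrent linear map
neither amplifies nor attenuates the signal, which is the mechanism behind the
stability claim.

```python
import numpy as np

def cayley_step(W, A, h):
    """One step of a flow on O(d): W <- W Q, where Q = (I - h/2 A)^(-1)(I + h/2 A)
    is exactly orthogonal whenever A is skew-symmetric."""
    I = np.eye(W.shape[0])
    return W @ np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

rng = np.random.default_rng(5)
d = 6
W = np.linalg.qr(rng.normal(size=(d, d)))[0]  # random initial orthogonal matrix
B = rng.normal(size=(d, d))
A = B - B.T                                   # skew-symmetric generator of the flow

x = rng.normal(size=d)
for _ in range(1000):
    W = cayley_step(W, A, h=0.01)             # weights evolve on O(d)...
    x = np.tanh(W @ x)                        # ...while driving the main flow

print("orthogonality drift:", np.abs(W.T @ W - np.eye(d)).max())
print("hidden-state norm:", np.linalg.norm(x))
```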
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- A Genetic Algorithm based Kernel-size Selection Approach for a Multi-column Convolutional Neural Network [11.040847116812046]
We introduce a genetic algorithm-based technique to reduce the effort of finding the optimal combination of a hyper-parameter (kernel size) for a convolutional neural network-based architecture.
The method is evaluated on three popular datasets of different handwritten Bangla characters and digits.
arXiv Detail & Related papers (2019-12-28T05:37:28Z)
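A toy version of the genetic-algorithm loop this entry describes, with a
synthetic fitness function standing in for the expensive train-and-validate
step; the population size and the selection, crossover, and mutation schemes
are assumptions, since the summary does not specify them.

```python
import numpy as np

rng = np.random.default_rng(6)
CHOICES = [3, 5, 7, 9]                        # candidate kernel sizes per column

def fitness(genome):
    """Stand-in for validation accuracy of a multi-column CNN with these kernel
    sizes; a real run would train and evaluate the network here."""
    target = np.array([5, 7, 3])              # pretend optimum, purely for the demo
    return -np.abs(np.array(genome) - target).sum() + rng.normal(0, 0.1)

def evolve(pop_size=12, n_cols=3, generations=15, p_mut=0.2):
    pop = [[int(v) for v in rng.choice(CHOICES, n_cols)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.choice(len(parents), 2, replace=False)
            cut = int(rng.integers(1, n_cols))
            child = parents[a][:cut] + parents[b][cut:]    # one-point crossover
            if rng.random() < p_mut:                       # random-reset mutation
                child[rng.integers(n_cols)] = int(rng.choice(CHOICES))
            children.append(child)
        pop = parents + children                           # elitist replacement
    return max(pop, key=fitness)

print("best kernel sizes per column:", evolve())
```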