Neural Network Complexity of Chaos and Turbulence
- URL: http://arxiv.org/abs/2211.15382v2
- Date: Thu, 20 Jul 2023 12:18:49 GMT
- Title: Neural Network Complexity of Chaos and Turbulence
- Authors: Tim Whittaker, Romuald A. Janik, Yaron Oz
- Abstract summary: We consider the relative complexity of chaos and turbulence from the perspective of deep neural networks.
We analyze a set of classification problems where the network has to distinguish images of fluid profiles in the turbulent regime from other classes of images.
We quantify the complexity of the computation performed by the network via the intrinsic dimensionality of the internal feature representations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chaos and turbulence are complex physical phenomena, yet a precise definition
of the complexity measure that quantifies them is still lacking. In this work
we consider the relative complexity of chaos and turbulence from the
perspective of deep neural networks. We analyze a set of classification
problems, where the network has to distinguish images of fluid profiles in the
turbulent regime from other classes of images such as fluid profiles in the
chaotic regime, various constructions of noise, and real-world images. We
analyze incompressible as well as weakly compressible fluid flows. We quantify
the complexity of the computation performed by the network via the intrinsic
dimensionality of the internal feature representations, and calculate the
effective number of independent features which the network uses in order to
distinguish between classes. In addition to providing a numerical estimate of
the complexity of the computation, the measure also characterizes the neural
network processing at intermediate and final stages. We construct adversarial
examples and use them to identify the two-point correlation spectra of the
chaotic and turbulent vorticity as the feature the network uses for
classification.
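
As an illustration of how such an intrinsic dimensionality can be estimated in practice, the sketch below implements the TwoNN estimator (Facco et al., 2017) on feature vectors extracted from one of the network's internal layers. This is not the authors' code: the function name, the `discard_fraction` parameter, and the reliance on NumPy and scikit-learn are assumptions made for the example.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def twonn_intrinsic_dimension(features, discard_fraction=0.1):
    """Estimate the intrinsic dimension of a point cloud with the TwoNN method:
    only the ratio of second to first nearest-neighbour distances is used,
    which makes the estimate insensitive to local density variations."""
    X = np.asarray(features, dtype=np.float64)
    n = X.shape[0]
    # distances to the two nearest neighbours (column 0 is the point itself)
    dist, _ = NearestNeighbors(n_neighbors=3).fit(X).kneighbors(X)
    r1, r2 = dist[:, 1], dist[:, 2]
    mask = r1 > 0                      # guard against duplicate points
    mu = np.sort(r2[mask] / r1[mask])
    # discard the largest ratios, which are dominated by boundary effects
    keep = int(len(mu) * (1.0 - discard_fraction))
    mu = mu[:keep]
    # the model predicts F(mu) = 1 - mu**(-d), i.e. -log(1 - F) = d * log(mu);
    # fit the slope d by least squares through the origin
    F = np.arange(1, keep + 1) / len(mu)
    x = np.log(mu)
    y = -np.log(1.0 - F * (1.0 - 1.0 / (keep + 1)))  # keep 1 - F strictly positive
    return np.sum(x * y) / np.sum(x * x)
```

Applied to the activations of successive layers over a batch of turbulent and chaotic snapshots, the resulting estimate gives a rough proxy for the effective number of independent features discussed in the abstract.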
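
The two-point correlation spectrum referred to above can be computed as the radially averaged Fourier power spectrum of a vorticity snapshot (by the Wiener-Khinchin theorem, this is the Fourier transform of the two-point correlation function). Below is a minimal NumPy sketch under that assumption, with illustrative names; comparing the spectra of clean and adversarially perturbed images is one way to check whether a perturbation acts on this statistic.

```python
import numpy as np

def radial_power_spectrum(field):
    """Radially averaged power spectrum of a 2D field (e.g. a vorticity
    snapshot), binned over shells of constant wavenumber magnitude |k|."""
    f = np.asarray(field, dtype=np.float64)
    f = f - f.mean()
    power = np.abs(np.fft.fft2(f)) ** 2
    ny, nx = f.shape
    kx = np.fft.fftfreq(nx) * nx       # integer wavenumbers in FFT order
    ky = np.fft.fftfreq(ny) * ny
    kmag = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    kbins = np.arange(0.5, min(nx, ny) // 2, 1.0)
    kcent = 0.5 * (kbins[1:] + kbins[:-1])
    which = np.digitize(kmag.ravel(), kbins)
    spectrum = np.array([power.ravel()[which == i].mean()
                         for i in range(1, len(kbins))])
    return kcent, spectrum
```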
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Spectral complexity of deep neural networks [2.099922236065961]
We use the angular power spectrum of the limiting field to characterize the complexity of the network architecture.
On this basis, we classify neural networks as low-disorder, sparse, or high-disorder.
We show how this classification highlights a number of distinct features for standard activation functions, and in particular, sparsity properties of ReLU networks.
arXiv Detail & Related papers (2024-05-15T17:55:05Z)
- Image segmentation with traveling waves in an exactly solvable recurrent neural network [71.74150501418039]
We show that a recurrent neural network can effectively divide an image into groups according to a scene's structural characteristics.
We present a precise description of the mechanism underlying object segmentation in this network.
We then demonstrate a simple algorithm for object segmentation that generalizes across inputs ranging from simple geometric objects in grayscale images to natural images.
arXiv Detail & Related papers (2023-11-28T16:46:44Z)
- Structural Balance and Random Walks on Complex Networks with Complex Weights [13.654842079699458]
Recent years have seen growing interest in extending the tools of network science to the case where edge weights are complex numbers.
Here, we focus on the case when the weight matrix is Hermitian, a reasonable assumption in many applications.
We introduce a classification of complex-weighted networks based on the notion of structural balance, and illustrate the shared spectral properties within each type.
arXiv Detail & Related papers (2023-07-04T16:39:52Z)
- Fluctuation based interpretable analysis scheme for quantum many-body snapshots [0.0]
Microscopically understanding and classifying phases of matter is at the heart of strongly-correlated quantum physics.
Here, we combine confusion learning with correlation convolutional neural networks, which yields fully interpretable phase detection.
Our work opens new directions in interpretable quantum image processing that is sensitive to long-range order.
arXiv Detail & Related papers (2023-04-12T17:59:59Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains poorly understood.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Emergent complex quantum networks in continuous-variables non-Gaussian states [0.0]
We study a class of continuous-variable quantum states that present both multipartite entanglement and non-Gaussian statistics.
In particular, the states are built from an initial imprinted cluster state created via Gaussian entangling operations.
arXiv Detail & Related papers (2020-12-31T13:58:37Z)
- Problems of representation of electrocardiograms in convolutional neural networks [58.720142291102135]
We show that these problems are systemic in nature.
They arise from the way convolutional networks handle composite objects whose parts are not rigidly fixed but have significant mobility.
arXiv Detail & Related papers (2020-12-01T14:02:06Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitude of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Complexity for deep neural networks and other characteristics of deep feature representations [0.0]
We define a notion of complexity, which quantifies the nonlinearity of the computation of a neural network.
We investigate these observables for trained networks and also explore their dynamics during training.
arXiv Detail & Related papers (2020-06-08T17:59:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.