A Scale-Invariant Diagnostic Approach Towards Understanding Dynamics of Deep Neural Networks
- URL: http://arxiv.org/abs/2407.09585v1
- Date: Fri, 12 Jul 2024 11:54:05 GMT
- Title: A Scale-Invariant Diagnostic Approach Towards Understanding Dynamics of Deep Neural Networks
- Authors: Ambarish Moharil, Damian Tamburri, Indika Kumara, Willem-Jan Van Den Heuvel, Alireza Azarfar
- Abstract summary: This paper introduces a scale-invariant methodology employing Fractal Geometry to analyze and explain the nonlinear dynamics of connectionist systems.
We quantify fractal dimensions and roughness to deeply understand their dynamics and enhance the quality of intrinsic explanations.
- Score: 0.09320657506524146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a scale-invariant methodology employing \textit{Fractal Geometry} to analyze and explain the nonlinear dynamics of complex connectionist systems. By leveraging architectural self-similarity in Deep Neural Networks (DNNs), we quantify fractal dimensions and \textit{roughness} to deeply understand their dynamics and enhance the quality of \textit{intrinsic} explanations. Our approach integrates principles from Chaos Theory to improve visualizations of fractal evolution and utilizes a Graph-Based Neural Network for reconstructing network topology. This strategy aims at advancing the \textit{intrinsic} explainability of connectionist Artificial Intelligence (AI) systems.
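The abstract does not spell out the computation, so as a rough, hedged illustration of one ingredient, the sketch below estimates a box-counting (Minkowski-Bouligand) fractal dimension of a sampled trajectory, e.g. a low-dimensional projection of weight updates recorded during training. The trajectory, the box sizes, and the decision to apply the estimator to training dynamics are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the Minkowski-Bouligand (box-counting) dimension of a point set.

    points: (n, d) array, e.g. a low-dimensional projection of a weight
            trajectory recorded during training (illustrative assumption).
    box_sizes: decreasing sequence of box edge lengths.
    """
    points = np.asarray(points, dtype=float)
    # Normalise the cloud into the unit hypercube so box sizes are comparable.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    points = (points - mins) / np.maximum(maxs - mins, 1e-12)

    counts = []
    for eps in box_sizes:
        # Index of the box each point falls into at this scale; count occupied boxes.
        idx = np.floor(points / eps).astype(int)
        counts.append(len({tuple(row) for row in idx}))

    # Slope of log N(eps) versus log(1/eps) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Toy usage: a noisy 2-D random walk standing in for projected weight updates.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=(5000, 2)), axis=0)
print(box_counting_dimension(traj, box_sizes=[0.2, 0.1, 0.05, 0.025, 0.0125]))
```

The slope of log N(eps) against log(1/eps) is the dimension estimate; a roughness measure such as a Hurst exponent could be extracted from the same trajectory in a similar spirit.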
Related papers
- Neural Symbolic Regression of Complex Network Dynamics [28.356824329954495]
We propose Physically Inspired Neural Dynamics Symbolic Regression (PI-NDSR) to automatically learn the symbolic expression of dynamics.
We evaluate our method on synthetic datasets generated by various dynamics and real datasets on disease spreading.
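PI-NDSR itself is not described here; as a generic stand-in, the sketch below recovers a symbolic expression for one-dimensional dynamics by least-squares regression over a small library of candidate terms followed by thresholding (a SINDy-style approach, not the paper's method). The library, threshold, and toy dynamics are assumptions for illustration.

```python
import numpy as np

# Simulate dx/dt = -0.5*x + 2*sin(x) to stand in for an observed dynamical signal.
t = np.linspace(0.0, 10.0, 2000)
dt = t[1] - t[0]
x = np.zeros_like(t)
x[0] = 1.0
for i in range(len(t) - 1):
    x[i + 1] = x[i] + dt * (-0.5 * x[i] + 2.0 * np.sin(x[i]))

dxdt = np.gradient(x, dt)                       # numerical derivative of the state

# Library of candidate symbolic terms; choosing it well is the hard part.
library = {"1": np.ones_like(x), "x": x, "x^2": x**2, "sin(x)": np.sin(x)}
Theta = np.column_stack(list(library.values()))

# Least squares followed by thresholding of small coefficients (sparsification).
coefs, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
coefs[np.abs(coefs) < 0.1] = 0.0

expression = " + ".join(f"{c:.2f}*{name}" for name, c in zip(library, coefs) if c != 0.0)
print("dx/dt ~", expression)
```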
arXiv Detail & Related papers (2024-10-15T02:02:30Z)
- Dynamic neurons: A statistical physics approach for analyzing deep neural networks [1.9662978733004601]
We treat neurons as additional degrees of freedom in interactions, simplifying the structure of deep neural networks.
By utilizing translational symmetry and renormalization group transformations, we can analyze critical phenomena.
This approach may open new avenues for studying deep neural networks using statistical physics.
arXiv Detail & Related papers (2024-10-01T04:39:04Z)
- Exploiting Chaotic Dynamics as Deep Neural Networks [1.9282110216621833]
We show that the essence of chaos can be found in various state-of-the-art deep neural networks.
Our framework achieves superior accuracy, convergence speed, and efficiency.
This study offers a new path for the integration of chaos, which has long been overlooked in information processing.
arXiv Detail & Related papers (2024-05-29T22:03:23Z)
- Nonlinear classification of neural manifolds with contextual information [6.292933471495322]
Manifold capacity has emerged as a promising framework linking population geometry to the separability of neural manifolds.
We propose a theoretical framework that overcomes this limitation by leveraging contextual input information.
Our framework's increased expressivity captures representation untanglement in deep networks at early stages of the layer hierarchy that were previously inaccessible to analysis.
arXiv Detail & Related papers (2024-05-10T23:37:31Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
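As a minimal sketch of the "networks as computational graphs of parameters" idea, the snippet below converts a plain MLP's weight matrices into an edge list over unit indices that a graph neural network could consume; the node indexing and the omission of biases and activation metadata are simplifying assumptions.

```python
import numpy as np

def mlp_to_parameter_graph(weight_matrices):
    """Turn an MLP's weight matrices into an edge list over unit indices.

    weight_matrices: list of (fan_out, fan_in) arrays, one per layer.
    Returns (num_nodes, edges) where edges = [(src, dst, weight), ...].
    Biases and activations are omitted to keep the sketch small.
    """
    layer_sizes = [weight_matrices[0].shape[1]] + [w.shape[0] for w in weight_matrices]
    offsets = np.cumsum([0] + layer_sizes)        # global node index offset per layer
    edges = []
    for layer, w in enumerate(weight_matrices):
        for dst in range(w.shape[0]):
            for src in range(w.shape[1]):
                edges.append((offsets[layer] + src, offsets[layer + 1] + dst, float(w[dst, src])))
    return int(offsets[-1]), edges

# Toy 2-3-1 MLP; a GNN operating on `edges` sees neuron permutations as graph symmetries.
rng = np.random.default_rng(0)
n_nodes, edges = mlp_to_parameter_graph([rng.normal(size=(3, 2)), rng.normal(size=(1, 3))])
print(n_nodes, len(edges))
```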
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- A Novel Convolutional Neural Network Architecture with a Continuous Symmetry [10.854440554663576]
This paper introduces a new Convolutional Neural Network (ConvNet) architecture inspired by a class of partial differential equations (PDEs).
With comparable performance on the image classification task, it allows the weights to be modified via a continuous symmetry group.
arXiv Detail & Related papers (2023-08-03T08:50:48Z)
- Reparameterization through Spatial Gradient Scaling [69.27487006953852]
Reparameterization aims to improve the generalization of deep neural networks by transforming convolutional layers into equivalent multi-branched structures during training.
We present a novel spatial gradient scaling method to redistribute learning focus among weights in convolutional networks.
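As a hedged illustration of redistributing learning across the spatial positions of a convolution kernel (not necessarily the paper's exact scaling rule), the snippet below attaches a gradient hook that rescales the weight gradient with a fixed spatial mask; the mask values are arbitrary assumptions.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

# Illustrative spatial mask over the 3x3 kernel positions (centre emphasised);
# the actual scaling schedule in the paper may differ.
mask = torch.tensor([[0.5, 1.0, 0.5],
                     [1.0, 2.0, 1.0],
                     [0.5, 1.0, 0.5]])
conv.weight.register_hook(lambda grad: grad * mask)  # broadcasts over (out, in, 3, 3)

x = torch.randn(4, 3, 16, 16)
conv(x).sum().backward()          # gradients are rescaled position-wise during backprop
print(conv.weight.grad.shape)
```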
arXiv Detail & Related papers (2023-03-05T17:57:33Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
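A minimal sketch of the differentiable-connectivity idea, assuming node features are aggregated through sigmoid-gated learnable edge scalars on a complete directed graph; the module name and the gating are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """Aggregate node features through learnable edge strengths on a complete graph."""

    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        # One learnable logit per directed edge; sigmoid keeps strengths in (0, 1).
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.update = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, dim). Entry (i, j) weights the message from node j to node i.
        adj = torch.sigmoid(self.edge_logits)
        return torch.relu(self.update(adj @ h))

block = LearnableConnectivity(num_nodes=5, dim=16)
out = block(torch.randn(5, 16))
print(out.shape)   # edge strengths receive gradients like any other weight
```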
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows yields stable and effective training and provably solves the vanishing/exploding gradient problem.
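A rough sketch of a flow on O(d), assuming the matrix is evolved by right-multiplication with the exponential of a small skew-symmetric generator at each step, which keeps it orthogonal; the fixed generator and step size are illustrative assumptions rather than the ODEtoODE construction.

```python
import torch

d, steps, h = 8, 10, 0.1
W = torch.eye(d)                         # start on the orthogonal group O(d)
G = torch.randn(d, d)

for _ in range(steps):
    A = G - G.T                          # skew-symmetric generator, so expm(h*A) is orthogonal
    W = W @ torch.matrix_exp(h * A)      # one step of the matrix flow, staying on O(d)

print(torch.allclose(W @ W.T, torch.eye(d), atol=1e-5))   # orthogonality is preserved
```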
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and better performance than their Euclidean counterparts.
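For reference, a small sketch of two textbook Poincaré-ball operations (the exponential map at the origin and Möbius addition) from which hyperbolic layers are typically built; the "hyperbolic affine" usage at the end is an illustrative composition, not the paper's layer definition.

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball of curvature -c (standard formula)."""
    xy = np.dot(x, y)
    x2, y2 = np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + c**2 * x2 * y2
    return num / den

def expmap0(v, c=1.0):
    """Exponential map at the origin: sends a tangent vector into the ball."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return np.zeros_like(v)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

# A 'hyperbolic affine' step in the spirit of such layers: map the Euclidean
# output of W @ x into the ball, then Mobius-translate by a ball-valued bias.
W = np.random.default_rng(0).normal(size=(2, 4))
x = np.random.default_rng(1).normal(size=4)
bias = expmap0(np.array([0.1, -0.2]))
print(mobius_add(expmap0(W @ x), bias))
```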
arXiv Detail & Related papers (2020-06-15T08:23:20Z)