Intrinsic and extrinsic deep learning on manifolds
- URL: http://arxiv.org/abs/2302.08606v1
- Date: Thu, 16 Feb 2023 22:10:38 GMT
- Title: Intrinsic and extrinsic deep learning on manifolds
- Authors: Yihao Fang, Ilsang Ohn, Vijay Gupta, Lizhen Lin
- Abstract summary: Intrinsic deep neural networks (iDNNs) incorporate the underlying intrinsic geometry of manifolds via exponential and log maps.
We prove that the empirical risk of the empirical risk minimizers (ERM) of eDNNs and iDNNs converges at optimal rates.
The eDNN framework is simple and easy to compute, while the iDNN framework is accurate and fast-converging.
- Score: 2.207988653560308
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose extrinsic and intrinsic deep neural network architectures as
general frameworks for deep learning on manifolds. Specifically, extrinsic deep
neural networks (eDNNs) preserve geometric features on manifolds by utilizing
an equivariant embedding from the manifold to its image in Euclidean space.
Moreover, intrinsic deep neural networks (iDNNs) incorporate the underlying
intrinsic geometry of manifolds via exponential and log maps with respect to a
Riemannian structure. We prove that the empirical risk of the
empirical risk minimizers (ERM) of eDNNs and iDNNs converges at optimal rates.
Overall, the eDNN framework is simple and easy to compute, while the iDNN
framework is accurate and fast-converging. To demonstrate the utility of our
frameworks, various simulation studies and real data analyses are presented
with eDNNs and iDNNs.
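To make the two constructions concrete, the following is a minimal NumPy sketch of both pipelines on the unit sphere S^2. This is not the authors' code; the backbone network, its sizes, and all function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): the eDNN and iDNN pipelines on the
# unit sphere S^2, sharing one small Euclidean feed-forward backbone.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Plain Euclidean network with one hidden tanh layer."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

def log_map_sphere(p, x, eps=1e-12):
    """Riemannian log map on S^2: sends points x to the tangent space at p."""
    c = np.clip(x @ p, -1.0, 1.0)
    theta = np.arccos(c)                       # geodesic distance from p
    v = x - c[:, None] * p                     # component orthogonal to p
    norm = np.linalg.norm(v, axis=1, keepdims=True)
    return theta[:, None] * v / np.maximum(norm, eps)

# Toy inputs: points on the sphere.
x = rng.normal(size=(8, 3))
x /= np.linalg.norm(x, axis=1, keepdims=True)

W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

# eDNN: the inclusion of S^2 into R^3 is an equivariant embedding, so the
# network acts directly on the embedded coordinates.
y_ednn = mlp(x, W1, b1, W2, b2)

# iDNN: first pull the data into the flat tangent space at a base point via
# the log map, then apply the same Euclidean network there.
p = np.array([0.0, 0.0, 1.0])                  # base point (north pole)
y_idnn = mlp(log_map_sphere(p, x), W1, b1, W2, b2)
```

On the sphere, the inclusion into R^3 already serves as an equivariant embedding, which is why the eDNN path needs no extra machinery here; on manifolds without such a convenient embedding, the iDNN route through exp/log maps is the natural alternative.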
Related papers
- Flexible and Scalable Deep Dendritic Spiking Neural Networks with Multiple Nonlinear Branching [39.664692909673086]
We propose the dendritic spiking neuron (DendSN) incorporating multiple dendritic branches with nonlinear dynamics.
Compared to point spiking neurons, DendSN exhibits significantly higher expressivity.
Our work demonstrates the possibility of training bio-plausible dendritic SNNs with depths and scales comparable to traditional point SNNs.
arXiv Detail & Related papers (2024-12-09T10:15:46Z)
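A rough sketch of the branching idea in the entry above, assuming NumPy; the tanh branch nonlinearity, leak constant, and hard reset are illustrative guesses, not details taken from the paper.

```python
# Hypothetical dendritic spiking neuron in the spirit of DendSN: several
# branches each apply their own nonlinearity before the soma integrates.
import numpy as np

def dend_sn_step(x_branches, v, w_branch, tau=2.0, v_th=1.0):
    """One timestep. x_branches: (n_branches, n_in) input drive per branch."""
    branch_out = np.tanh(x_branches.sum(axis=1)) * w_branch  # nonlinear branches
    v = v + (branch_out.sum() - v) / tau                     # leaky soma
    spike = float(v >= v_th)
    v = v * (1.0 - spike)                                    # reset after spike
    return spike, v

# Example: three branches, four synapses each.
rng = np.random.default_rng(0)
v, w = 0.0, np.array([0.5, 0.8, 0.3])
spike, v = dend_sn_step(rng.normal(size=(3, 4)), v, w)
```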
- Direct Training High-Performance Deep Spiking Neural Networks: A Review of Theories and Methods [33.377770671553336]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks (ANNs).
In this paper, we provide a new perspective to summarize the theories and methods for training deep SNNs with high performance.
arXiv Detail & Related papers (2024-05-06T09:58:54Z)
- Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z)
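The entry above suggests the standard way residual blocks are lifted to manifolds: replace x + f(x) with exp_x(f(x)), constraining f to output tangent vectors. A hedged NumPy sketch on the unit sphere (not the paper's implementation):

```python
# Residual block x <- x + f(x) becomes x <- exp_x(f(x)) on the manifold, with
# f projected into the tangent space at x. Linear feature map chosen for the sketch.
import numpy as np

def exp_map_sphere(x, v, eps=1e-12):
    """Exponential map on S^2: move from x along tangent vector v."""
    n = np.linalg.norm(v)
    return x if n < eps else np.cos(n) * x + np.sin(n) * (v / n)

def riemannian_residual_step(x, W):
    """One residual block on S^2."""
    f = W @ x
    f_tangent = f - (f @ x) * x        # drop the component normal to the sphere
    return exp_map_sphere(x, f_tangent)

x = np.array([0.0, 0.0, 1.0])
W = np.random.default_rng(0).normal(size=(3, 3)) * 0.1
x = riemannian_residual_step(x, W)     # stays on the sphere
```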
- Interpretable Neural Networks with Random Constructive Algorithm [3.1200894334384954]
This paper introduces an Interpretable Neural Network (INN) that incorporates spatial information to tackle the opaque parameterization process of random weighted neural networks.
It devises a geometric relationship strategy that uses a pool of candidate nodes and established relationships to select node parameters conducive to network convergence.
arXiv Detail & Related papers (2023-07-01T01:07:20Z)
- Multi-scale Evolutionary Neural Architecture Search for Deep Spiking Neural Networks [7.271032282434803]
We propose a Multi-Scale Evolutionary Neural Architecture Search (MSE-NAS) method for Spiking Neural Networks (SNNs).
MSE-NAS evolves individual neuron operations, the self-organized integration of multiple circuit motifs, and global connectivity across motifs, guided by a brain-inspired indirect evaluation function based on Representational Dissimilarity Matrices (RDMs).
The proposed algorithm achieves state-of-the-art (SOTA) performance with shorter simulation steps on static datasets and neuromorphic datasets.
arXiv Detail & Related papers (2023-04-21T05:36:37Z)
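An RDM is cheap to compute, which is what makes it usable as an indirect fitness signal inside a search loop. A small illustrative NumPy sketch; the 1 - correlation dissimilarity and upper-triangle comparison are common choices, not necessarily the paper's exact ones.

```python
# Representational Dissimilarity Matrix: scores a network indirectly by how
# its internal responses differ across stimuli.
import numpy as np

def rdm(responses, eps=1e-12):
    """responses: (n_stimuli, n_features) activations for a set of stimuli.
    Returns the (n, n) matrix of pairwise dissimilarities."""
    z = responses - responses.mean(axis=1, keepdims=True)
    z /= np.maximum(np.linalg.norm(z, axis=1, keepdims=True), eps)
    return 1.0 - z @ z.T                      # 1 - Pearson correlation

def rdm_similarity(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (e.g. candidate network vs.
    a reference); usable as an indirect fitness inside a NAS loop."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]
```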
- On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs, owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves into the intrinsic structures of SNNs, elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z)
- Masked Bayesian Neural Networks: Computation and Optimality [1.3649494534428745]
We propose a novel sparse Bayesian neural network (BNN) which searches for a good deep neural network with an appropriate complexity.
We employ masking variables at each node, which can turn off some nodes according to the posterior distribution, to yield a nodewise sparse DNN.
By analyzing several benchmark datasets, we illustrate that the proposed BNN performs well compared to other existing methods.
arXiv Detail & Related papers (2022-06-02T02:59:55Z)
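A minimal sketch of the nodewise masking idea from the entry above, assuming NumPy; the Bernoulli sampling and ReLU layer are illustrative stand-ins for the paper's posterior machinery.

```python
# Nodewise masking: each hidden node carries a binary mask drawn from its
# posterior inclusion probability, so unsupported nodes are switched off.
import numpy as np

rng = np.random.default_rng(0)

def masked_layer(x, W, b, incl_prob):
    """incl_prob[j]: posterior probability that hidden node j is active."""
    m = rng.random(incl_prob.shape) < incl_prob  # sample node masks
    return np.maximum(x @ W + b, 0.0) * m        # masked nodes output zero
```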
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
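For context, the interval bound propagation baseline mentioned in the entry above pushes elementwise bounds through each layer; a minimal NumPy sketch for one affine + ReLU layer:

```python
# Interval bound propagation: push elementwise input bounds [lo, hi] through
# an affine layer followed by ReLU.
import numpy as np

def ibp_affine_relu(lo, hi, W, b):
    """Sound elementwise bounds for relu(W @ x + b) over all x in [lo, hi]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # minimize each pre-activation
    out_hi = W_pos @ hi + W_neg @ lo + b   # maximize each pre-activation
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)
```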
- Extended critical regimes of deep neural networks [0.0]
We show that heavy-tailed weights enable the emergence of an extended critical regime without fine-tuning parameters.
In this extended critical regime, DNNs exhibit rich and complex propagation dynamics across layers.
We provide a theoretical guide for the design of efficient neural architectures.
arXiv Detail & Related papers (2022-03-24T10:15:50Z)
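A toy illustration of the setup in the entry above, assuming NumPy: draw weights from a heavy-tailed (here Cauchy) distribution rather than a Gaussian; the 1/n_in scaling is an assumption made for this sketch.

```python
# Heavy-tailed weight initialization instead of the usual Gaussian.
import numpy as np

def heavy_tailed_init(n_out, n_in, scale=1.0, seed=0):
    """Cauchy-distributed weights, scaled by fan-in."""
    rng = np.random.default_rng(seed)
    return scale * rng.standard_cauchy((n_out, n_in)) / n_in
```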
- Fusing the Old with the New: Learning Relative Camera Pose with Geometry-Guided Uncertainty [91.0564497403256]
We present a novel framework that involves probabilistic fusion between the two families of predictions during network training.
Our network features a self-attention graph neural network, which drives the learning by enforcing strong interactions between different correspondences.
We propose motion parameterizations suitable for learning and show that our method achieves state-of-the-art performance on the challenging DeMoN and ScanNet datasets.
arXiv Detail & Related papers (2021-04-16T17:59:06Z)
- Inter-layer Information Similarity Assessment of Deep Neural Networks Via Topological Similarity and Persistence Analysis of Data Neighbour Dynamics [93.4221402881609]
The quantitative analysis of information structure through a deep neural network (DNN) can unveil new insights into the theoretical performance of DNN architectures.
Inspired by both LS and ID strategies for quantitative information structure analysis, we introduce two novel complementary methods for inter-layer information similarity assessment.
We demonstrate their efficacy in this study by performing analysis on a deep convolutional neural network architecture on image data.
arXiv Detail & Related papers (2020-12-07T15:34:58Z)
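A loose NumPy sketch of the "data neighbour dynamics" idea from the entry above: compare each sample's k-nearest-neighbour set between two layers' representations. The Jaccard overlap is an illustrative similarity, not necessarily the paper's measure.

```python
# High neighbour-set overlap between layers suggests they encode a similar
# information structure over the data.
import numpy as np

def knn_sets(feats, k=5):
    """Index sets of the k nearest Euclidean neighbours of every sample."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # a sample is not its own neighbour
    return [set(row) for row in np.argsort(d, axis=1)[:, :k]]

def interlayer_similarity(feats_a, feats_b, k=5):
    """Mean Jaccard overlap of neighbour sets between two layers."""
    na, nb = knn_sets(feats_a, k), knn_sets(feats_b, k)
    return float(np.mean([len(a & b) / len(a | b) for a, b in zip(na, nb)]))
```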
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.