Hybrid Quantum-inspired Resnet and Densenet for Pattern Recognition with
Completeness Analysis
- URL: http://arxiv.org/abs/2403.05754v1
- Date: Sat, 9 Mar 2024 01:34:26 GMT
- Title: Hybrid Quantum-inspired Resnet and Densenet for Pattern Recognition with
Completeness Analysis
- Authors: Andi Chen, Hua-Lei Yin, Zeng-Bing Chen, Shengjun Wu
- Abstract summary: Post-Moore era has spurred the development of quantum-inspired neural networks with outstanding potentials.
We propose two hybrid quantum-inspired neural networks which are rooted in residual and dense connections.
Comparative analyses reveal that our hybrid models with lower parameter complexity not only match the generalization power of pure classical models, but also outperform them notably in resistance to parameter attacks with various asymmetric noises.
- Score: 1.1470070927586018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As contemporary digital technology advances, deep neural networks
are emerging as the foundational algorithm of the artificial intelligence boom.
However, evolving social demands have emphasized the need for novel
methodologies to replace traditional neural networks. Concurrently,
the advent of the post-Moore era has spurred the development of
quantum-inspired neural networks with outstanding potentials at certain
circumstances. Nonetheless, a definitive evaluation system with detailed
metrics is vital and indispensable, given the currently vague indicators for
comparing novel and traditional deep learning models.
Hence, to improve and evaluate the performance of the novel neural
networks more comprehensively in complex and unpredictable environments, we
propose two hybrid quantum-inspired neural networks which are rooted in
residual and dense connections respectively for pattern recognitions with
completeness representation theory for model assessment. Comparative analyses
against pure classical models with detailed frameworks reveal that our hybrid
models with lower parameter complexity not only match the generalization power
of pure classical models, but also outperform them notably in resistance to
parameter attacks with various asymmetric noises. Moreover, theoretical
arguments show that our hybrid models have a unique advantage in preventing
gradient explosion. Finally, we elaborate on the application scenarios where
our hybrid models are applicable and efficient, paving the way for their
industrialization and commercialization.
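The residual and dense connections the abstract builds on can be sketched minimally. This is an illustrative classical sketch only, not the authors' hybrid quantum-inspired layers; the weight shapes and ReLU activation are assumptions for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w):
    """ResNet-style connection: add the block's output to its input, y = x + f(x)."""
    return x + relu(x @ w)

def dense_block(x, weights):
    """DenseNet-style connection: each layer sees the concatenation of all
    earlier feature maps, and the block returns everything concatenated."""
    features = [x]
    for w in weights:
        h = relu(np.concatenate(features, axis=-1) @ w)
        features.append(h)
    return np.concatenate(features, axis=-1)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                       # batch of 4, feature dim 8
y_res = residual_block(x, rng.normal(size=(8, 8)))
y_dense = dense_block(x, [rng.normal(size=(8, 4)),   # 8 -> 4 new features
                          rng.normal(size=(12, 4))]) # 8+4 -> 4 new features
print(y_res.shape, y_dense.shape)  # (4, 8) (4, 16)
```

The skip paths are also what underlies the gradient-explosion argument: the identity/concatenation routes give gradients a direct path around each transformation.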
Related papers
- Neural Residual Diffusion Models for Deep Scalable Vision Generation [17.931568104324985]
We propose a unified and massively scalable Neural Residual Diffusion Models framework (Neural-RDM)
The proposed neural residual models obtain state-of-the-art scores on image and video generative benchmarks.
arXiv Detail & Related papers (2024-06-19T04:57:18Z) - A method for quantifying the generalization capabilities of generative models for solving Ising models [5.699467840225041]
We use a Hamming distance regularizer to quantify the generalization capabilities of various network architectures combined with VAN.
We conduct numerical experiments on several network architectures combined with VAN, including feed-forward neural networks, recurrent neural networks, and graph neural networks.
Our method is of great significance for assisting Neural Architecture Search in finding optimal network architectures for solving large-scale Ising models.
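The Hamming distance the regularizer above is built on is simple to state for Ising spin configurations; the sketch below is illustrative only (the paper's actual regularizer, and how it is weighted into the loss, are not specified here).

```python
import numpy as np

def hamming_distance(s1, s2):
    """Number of spins that differ between two Ising configurations (+1/-1)."""
    s1, s2 = np.asarray(s1), np.asarray(s2)
    return int(np.sum(s1 != s2))

# Normalized by system size, this gives a [0, 1] dissimilarity that a
# regularizer could use to compare generated samples.
a = np.array([1, -1, 1, 1])
b = np.array([1, 1, -1, 1])
print(hamming_distance(a, b))  # 2
```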
arXiv Detail & Related papers (2024-05-06T12:58:48Z) - Bayesian sparsification for deep neural networks with Bayesian model
reduction [0.6144680854063939]
We advocate for the use of Bayesian model reduction (BMR) as a more efficient alternative for pruning of model weights.
BMR allows a post-hoc elimination of redundant model weights based on the posterior estimates under a straightforward (non-hierarchical) generative model.
We illustrate the potential of BMR across various deep learning architectures, from classical networks like LeNet to modern frameworks such as Vision Transformers and MLP-Mixers.
arXiv Detail & Related papers (2023-09-21T14:10:47Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Generalization and Estimation Error Bounds for Model-based Neural
Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z) - High Accuracy Uncertainty-Aware Interatomic Force Modeling with
Equivariant Bayesian Neural Networks [3.028098724882708]
We introduce a new Monte Carlo Markov chain sampling algorithm for learning interatomic forces.
In addition, we introduce a new neural network model based on the NequIP architecture and demonstrate that, when combined with our novel sampling algorithm, we obtain predictions with state-of-the-art accuracy as well as a good measure of uncertainty.
arXiv Detail & Related papers (2023-04-05T10:39:38Z) - Maximum entropy exploration in contextual bandits with neural networks
and energy based models [63.872634680339644]
We present two classes of models, one with neural networks as reward estimators, and the other with energy based models.
We show that both techniques outperform well-known standard algorithms, where energy based models have the best overall performance.
This provides practitioners with new techniques that perform well in static and dynamic settings, and are particularly well suited to non-linear scenarios with continuous action spaces.
arXiv Detail & Related papers (2022-10-12T15:09:45Z) - Physically constrained neural networks to solve the inverse problem for
neuron models [0.29005223064604074]
Systems biology and systems neurophysiology are powerful tools for a number of key applications in the biomedical sciences.
Recent developments in the field of deep neural networks have demonstrated the possibility of formulating nonlinear, universal approximators.
arXiv Detail & Related papers (2022-09-24T12:51:15Z) - Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% less parameters compared to the original network, without loss of accuracy.
arXiv Detail & Related papers (2021-06-24T01:40:17Z) - Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z) - Hyperbolic Neural Networks++ [66.16106727715061]
We generalize the fundamental components of neural networks in a single hyperbolic geometry model, namely, the Poincaré ball model.
Experiments show the superior parameter efficiency of our methods compared to conventional hyperbolic components, as well as greater stability and performance than their Euclidean counterparts.
arXiv Detail & Related papers (2020-06-15T08:23:20Z)
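The basic operation behind Poincaré-ball networks like the one above is Möbius addition, which replaces vector addition while keeping points inside the ball. A minimal sketch (standard formula with curvature parameter `c`; not code from the paper):

```python
import numpy as np

def mobius_add(x, y, c=1.0):
    """Mobius addition on the Poincare ball of curvature -c:
    (x (+) y) stays inside the ball whenever x and y do."""
    xy = np.dot(x, y)
    x2 = np.dot(x, x)
    y2 = np.dot(y, y)
    num = (1 + 2 * c * xy + c * y2) * x + (1 - c * x2) * y
    den = 1 + 2 * c * xy + (c ** 2) * x2 * y2
    return num / den

x = np.array([0.1, 0.2])
y = np.array([-0.3, 0.05])
z = mobius_add(x, y)
print(np.linalg.norm(z) < 1.0)  # result remains inside the unit ball
```

Adding the origin is the identity (`mobius_add(x, 0) == x`), which is why it can stand in for `+` in hyperbolic layers.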
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.