Deep Autoencoder Model Construction Based on Pytorch
- URL: http://arxiv.org/abs/2208.08231v1
- Date: Wed, 17 Aug 2022 11:19:05 GMT
- Title: Deep Autoencoder Model Construction Based on Pytorch
- Authors: Junan Pan, Zhihao Zhao
- Abstract summary: This paper introduces the idea of PyTorch into the auto-encoder and randomly clears the input weights connected to the hidden-layer neurons with a certain probability.
The new algorithm effectively mitigates possible overfitting of the model and improves the accuracy of image classification.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a deep autoencoder model based on PyTorch. The
algorithm introduces the idea of PyTorch into the auto-encoder and randomly
clears the input weights connected to the hidden-layer neurons with a certain
probability, so as to achieve a sparse network, which is similar to the
starting point of the sparse auto-encoder. The new algorithm effectively
mitigates possible overfitting of the model and improves the accuracy of image
classification. Finally, experiments are carried out, and the results are
compared with those of ELM, RELM, AE, SAE, and DAE.
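The weight-clearing step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the dropout-style mechanism (the function and parameter names are ours, not the paper's): each entry of the input-to-hidden weight matrix is set to zero with probability p.

```python
import numpy as np

def drop_input_weights(W, p, rng):
    """Randomly clear entries of the input-to-hidden weight matrix W
    with probability p, yielding a sparser network in the spirit of
    dropout / sparse auto-encoders."""
    mask = rng.random(W.shape) >= p  # keep each weight with prob 1 - p
    return W * mask

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))              # 4 inputs -> 8 hidden units
W_sparse = drop_input_weights(W, 0.5, rng)   # roughly half the weights cleared
```

Every entry of `W_sparse` is either zero or the corresponding entry of `W`; at p = 0 the matrix is unchanged, and at p = 1 it is all zeros.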
Related papers
- Self-Distilled Masked Auto-Encoders are Efficient Video Anomaly
Detectors [117.61449210940955]
We propose an efficient abnormal event detection model based on a lightweight masked auto-encoder (AE) applied at the video frame level.
We introduce an approach to weight tokens based on motion gradients, thus shifting the focus from the static background scene to the foreground objects.
We generate synthetic abnormal events to augment the training videos, and task the masked AE model to jointly reconstruct the original frames.
arXiv Detail & Related papers (2023-06-21T06:18:05Z) - PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for
Linear and Integer Programming [9.764407462807588]
We present the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python.
PyEPO is the first such generic tool for linear and integer programming with predicted objective function coefficients.
arXiv Detail & Related papers (2022-06-28T18:33:55Z) - TorchNTK: A Library for Calculation of Neural Tangent Kernels of PyTorch
Models [16.30276204466139]
We introduce torchNTK, a python library to calculate the empirical neural tangent kernel (NTK) of neural network models in the PyTorch framework.
A feature of the library is that we expose the user to layerwise NTK components, and show that in some regimes a layerwise calculation is more memory efficient.
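The quantity such a library computes can be illustrated with a generic sketch (this is the textbook definition of the empirical NTK, not torchNTK's API): the kernel entry for two inputs is the inner product of the parameter Jacobians of the network outputs.

```python
import numpy as np

def empirical_ntk(jac1, jac2):
    """Empirical NTK between two batches of inputs, given the Jacobians
    of the scalar network outputs with respect to the parameters:
    K[i, j] = <df(x_i)/dtheta, df(x_j)/dtheta>."""
    return jac1 @ jac2.T

# Worked check: for a linear model f(x) = w @ x, the parameter Jacobian
# of f(x) is x itself, so the empirical NTK reduces to the Gram matrix.
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
K = empirical_ntk(X, X)  # Jacobians coincide with the inputs here
```

For deep networks the Jacobians come from automatic differentiation (as in PyTorch), and computing them layer by layer, as the library's abstract notes, can reduce peak memory.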
arXiv Detail & Related papers (2022-05-24T21:27:58Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - PyLightcurve-torch: a transit modelling package for deep learning
applications in PyTorch [0.0]
We present a new open source python package, based on PyLightcurve and PyTorch.
It is tailored for efficient computation and automatic differentiation of exoplanetary transits.
arXiv Detail & Related papers (2020-11-03T22:05:41Z) - AutoPruning for Deep Neural Network with Dynamic Channel Masking [28.018077874687343]
We propose a learning-based auto-pruning algorithm for deep neural networks.
A two-objective problem that aims for the best weights and the best channels for each layer is first formulated.
An alternative optimization approach is then proposed to derive the optimal channel numbers and weights simultaneously.
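Dynamic channel masking of this kind can be illustrated with a minimal sketch (the names are illustrative, not the paper's code): each channel of a feature map is scaled by one mask entry, and a zero entry prunes that channel entirely.

```python
import numpy as np

def apply_channel_mask(feature_maps, mask):
    """Scale each channel of a (C, H, W) feature map by a per-channel
    mask entry; a zero entry prunes that channel entirely."""
    return feature_maps * mask[:, None, None]

x = np.ones((3, 2, 2))            # 3 channels of 2x2 features
mask = np.array([1.0, 0.0, 1.0])  # mask out the middle channel
y = apply_channel_mask(x, mask)
```

In a learned pruning scheme the mask entries would themselves be optimized (or thresholded from continuous scores), so that channel selection and weight training proceed together.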
arXiv Detail & Related papers (2020-10-22T20:12:46Z) - Probabilistic Object Classification using CNN ML-MAP layers [0.0]
We introduce a probabilistic CNN approach based on distributions calculated in the network's logit layer.
The new approach shows promising performance compared to SoftMax.
arXiv Detail & Related papers (2020-05-29T13:34:15Z) - Few-Shot Open-Set Recognition using Meta-Learning [72.15940446408824]
The problem of open-set recognition is considered.
A new oPen sEt mEta LEaRning (PEELER) algorithm is introduced.
arXiv Detail & Related papers (2020-05-27T23:49:26Z) - DHP: Differentiable Meta Pruning via HyperNetworks [158.69345612783198]
This paper introduces a differentiable pruning method via hypernetworks for automatic network pruning.
Latent vectors control the output channels of the convolutional layers in the backbone network and act as a handle for the pruning of the layers.
Experiments are conducted on various networks for image classification, single image super-resolution, and denoising.
arXiv Detail & Related papers (2020-03-30T17:59:18Z) - Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based training combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.