Characterization and recognition of handwritten digits using Julia
- URL: http://arxiv.org/abs/2102.11994v1
- Date: Wed, 24 Feb 2021 00:30:41 GMT
- Title: Characterization and recognition of handwritten digits using Julia
- Authors: M. A. Jishan, M. S. Alam, Afrida Islam, I. R. Mazumder, K. R. Mahmud
and A. K. Al Azad
- Abstract summary: We implement a hybrid neural network model that is capable of recognizing the digits of the MNIST dataset.
The proposed neural network model can extract features from the image and recognize them layer by layer.
It also covers an auto-encoding system and a variational auto-encoding system for the MNIST dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic image and digit recognition is a computationally challenging task
for image processing and pattern recognition, requiring an adequate
appreciation of the syntactic and semantic importance of the image for the
identification of handwritten digits. Image and pattern recognition has been
identified as one of the driving forces in this research area because of its
wide range of applications, such as safety frameworks, clinical frameworks,
entertainment, and so on. In this study, we implemented a hybrid neural
network model that is capable of recognizing the digits of the MNIST dataset,
and it achieved a remarkable result. The proposed neural network model can
extract features from the image and recognize them layer by layer. To expand
on this, it is important to understand how the proposed model works in each
layer, how it generates output, and so on. Besides, the study also covers an
auto-encoding system and a variational auto-encoding system for the MNIST
dataset. This study explores the issues discussed above, explains them, and
shows how the associated challenges can be overcome.
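The listing above gives only the abstract, so the sketch below is a rough illustration of the kind of model described (a layer-by-layer feature extractor plus a plain auto-encoder for MNIST) written in Julia. The Flux.jl package, the classic implicit-parameter training API, and the random stand-in data are assumptions made for illustration; the paper's actual implementation, architecture, and hyperparameters are not reproduced here.

```julia
# Illustrative sketch only (not the authors' code): a small layer-by-layer
# MNIST classifier and a plain auto-encoder in Julia, assuming Flux.jl and
# its classic implicit-parameter API.
using Flux
using Flux: onehotbatch

# Stand-in batch shaped like flattened 28x28 MNIST images with digit labels.
x = rand(Float32, 28 * 28, 64)        # 64 flattened images (784 x 64)
y = onehotbatch(rand(0:9, 64), 0:9)   # one-hot labels for digits 0-9

# Feed-forward classifier: features are extracted and refined layer by layer.
classifier = Chain(
    Dense(784, 128, relu),
    Dense(128, 64, relu),
    Dense(64, 10),
    softmax)

# Plain auto-encoder: the encoder compresses each image to a 16-dim code and
# the decoder reconstructs the 784-pixel input from that code.
encoder = Chain(Dense(784, 64, relu), Dense(64, 16, relu))
decoder = Chain(Dense(16, 64, relu), Dense(64, 784, sigmoid))
autoencoder = Chain(encoder, decoder)

# One gradient step for each model using the implicit-parameter API.
opt = ADAM(1e-3)

ps_c = Flux.params(classifier)
gs_c = gradient(() -> Flux.Losses.crossentropy(classifier(x), y), ps_c)
Flux.Optimise.update!(opt, ps_c, gs_c)

ps_a = Flux.params(autoencoder)
gs_a = gradient(() -> Flux.Losses.mse(autoencoder(x), x), ps_a)
Flux.Optimise.update!(opt, ps_a, gs_a)
```

A variational auto-encoder variant of this sketch would replace the 16-dimensional code with a mean and log-variance pair, sample the latent vector via the reparameterization trick, and add a KL-divergence term to the reconstruction loss.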
Related papers
- Pre-processing and Compression: Understanding Hidden Representation Refinement Across Imaging Domains via Intrinsic Dimension [1.7495213911983414]
  We show that medical image models peak in representation ID earlier in the network.
  We also find a strong correlation of this peak representation ID with the ID of the data in its input space.
  arXiv Detail & Related papers (2024-08-15T18:54:31Z)
- Research on Image Recognition Technology Based on Multimodal Deep Learning [24.259653149898167]
  This project investigates a human multi-modal behavior identification algorithm utilizing deep neural networks.
  The performance of the suggested algorithm was evaluated using the MSR3D data set.
  arXiv Detail & Related papers (2024-05-06T01:05:21Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
  The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
  ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
  Experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on both the targeted dataset and the source dataset.
  arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
  We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
  A semantic segmentation map is generated to guide the affine transformation operation.
  The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
  arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
  We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
  By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
  arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Research on facial expression recognition based on multimodal data fusion and neural network [2.5431493111705943]
  The algorithm is based on multimodal data and takes the facial image, the histogram of oriented gradients of the image, and the facial landmarks as input.
  Experimental results show that, benefiting from the complementarity of multimodal data, the algorithm achieves a great improvement in accuracy, robustness, and detection speed.
  arXiv Detail & Related papers (2021-09-26T23:45:40Z)
- Joint Learning of Neural Transfer and Architecture Adaptation for Image Recognition [77.95361323613147]
  Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
  In this work, we prove that dynamically adapting network architectures tailored for each domain task, along with weight finetuning, benefits both efficiency and effectiveness.
  Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
  arXiv Detail & Related papers (2021-03-31T08:15:17Z)
- A 3D GAN for Improved Large-pose Facial Recognition [3.791440300377753]
  Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images.
  Recent studies have shown that current methods of disentangling pose from identity are inadequate.
  In this work we incorporate a 3D morphable model into the generator of a GAN in order to learn a nonlinear texture model from in-the-wild images.
  This allows generation of new, synthetic identities, and manipulation of pose, illumination, and expression without compromising the identity.
  arXiv Detail & Related papers (2020-12-18T22:41:15Z)
- Continuous Emotion Recognition with Spatiotemporal Convolutional Neural Networks [82.54695985117783]
  We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
  We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, and inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
  arXiv Detail & Related papers (2020-11-18T13:42:05Z)
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
  We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
  The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
  arXiv Detail & Related papers (2020-07-20T18:04:14Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
  It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
  Deep neural networks have been used with great success in recognizing emotions.
  We present a new model for continuous emotion recognition based on facial expression recognition.
  arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.