Offline Writer Identification Using Convolutional Neural Network Activation Features
- URL: http://arxiv.org/abs/2402.17029v1
- Date: Mon, 26 Feb 2024 21:16:14 GMT
- Title: Offline Writer Identification Using Convolutional Neural Network Activation Features
- Authors: Vincent Christlein, David Bernecker, Andreas Maier, Elli Angelopoulou
- Abstract summary: Convolutional neural networks (CNNs) have recently become the state-of-the-art tool for large-scale image classification.
In this work we propose the use of activation features from CNNs as local descriptors for writer identification.
We evaluate our method on two publicly available datasets: the ICDAR 2013 benchmark database and the CVL dataset.
- Score: 6.589323210821262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) have recently become the
state-of-the-art tool for large-scale image classification. In this work we
propose the use of activation features from CNNs as local descriptors for
writer identification. A global descriptor is then formed by means of GMM
supervector encoding, which is further improved by normalization with the
KL-Kernel. We evaluate our method on two publicly available datasets: the ICDAR
2013 benchmark database and the CVL dataset. While we perform comparably to the
state of the art on CVL, our proposed method yields about 0.21 absolute
improvement in terms of mAP on the challenging bilingual ICDAR dataset.
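For readers who want the shape of the pipeline, the following is a minimal sketch of the encoding stage, assuming diagonal-covariance GMMs, mean-only MAP adaptation, and illustrative parameter values; the extraction of local descriptors from CNN activations is omitted, and none of the names come from the authors' code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_ubm(descriptors, n_components=100):
    """Fit a background GMM (UBM) on local descriptors pooled over all writers."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag")
    ubm.fit(descriptors)
    return ubm

def supervector(ubm, descriptors, relevance=16.0):
    """Encode one document: MAP-adapt the UBM means and stack them.

    The final scaling by sqrt(w_k) / sigma_k corresponds to the KL-kernel
    normalization commonly used with GMM supervectors.
    """
    post = ubm.predict_proba(descriptors)          # (N, K) responsibilities
    n_k = post.sum(axis=0)                         # soft occupation counts
    f_k = post.T @ descriptors                     # (K, D) first-order stats
    alpha = (n_k / (n_k + relevance))[:, None]     # adaptation weights
    mu_hat = (alpha * f_k / np.maximum(n_k, 1e-8)[:, None]
              + (1.0 - alpha) * ubm.means_)        # adapted means
    norm = np.sqrt(ubm.weights_)[:, None] / np.sqrt(ubm.covariances_)
    return (norm * mu_hat).ravel()                 # global descriptor
```

Writer retrieval would then rank documents by the similarity of these global descriptors.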
Related papers
- DCNN: Dual Cross-current Neural Networks Realized Using An Interactive Deep Learning Discriminator for Fine-grained Objects [48.65846477275723]
This study proposes novel dual cross-current neural networks (DCNN) to improve the accuracy of fine-grained image classification.
The main novel design features for constructing a weakly supervised learning backbone model DCNN include (a) extracting heterogeneous data, (b) keeping the feature map resolution unchanged, (c) expanding the receptive field, and (d) fusing global representations and local features.
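The summary above is architecture-level only; as a rough illustration of points (b)-(d), a dilated convolution widens the receptive field while keeping the feature-map resolution, and global and local representations can be fused by pooling and concatenation. Everything below is an assumption, not the paper's DCNN.

```python
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, channels=256, n_classes=200):
        super().__init__()
        # dilation widens the receptive field; padding keeps H x W unchanged
        self.local = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=2, dilation=2)
        self.classifier = nn.Linear(2 * channels, n_classes)

    def forward(self, fmap):                   # fmap: (B, C, H, W) backbone output
        local = torch.relu(self.local(fmap))   # same spatial resolution
        g = fmap.mean(dim=(2, 3))              # global representation
        l = local.mean(dim=(2, 3))             # pooled local features
        return self.classifier(torch.cat([g, l], dim=1))

logits = GlobalLocalFusion()(torch.randn(2, 256, 14, 14))  # (2, 200)
```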
arXiv Detail & Related papers (2024-05-07T07:51:28Z)
- Integrating Graph Neural Networks with Scattering Transform for Anomaly Detection [0.0]
We present two novel methods for Network Intrusion Detection Systems (NIDS) using Graph Neural Networks (GNNs).
The first approach, Scattering Transform with E-GraphSAGE (STEG), utilizes the scattering transform to conduct multi-resolution analysis of edge feature vectors.
The second approach improves node representations by initializing them with Node2Vec embeddings, diverging from the standard practice of uniform initial values.
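A hedged sketch of such a Node2Vec-style initialization, using torch_geometric's Node2Vec; the toy graph and all hyperparameters are illustrative, not the paper's settings.

```python
import torch
from torch_geometric.nn import Node2Vec  # random walks may need torch_cluster

# Toy 3-node path graph; edge_index lists directed edges in both directions.
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
model = Node2Vec(edge_index, embedding_dim=64, walk_length=10,
                 context_size=5, walks_per_node=10, sparse=True)
loader = model.loader(batch_size=32, shuffle=True)
optimizer = torch.optim.SparseAdam(list(model.parameters()), lr=0.01)

for epoch in range(5):                     # short skip-gram training loop
    for pos_rw, neg_rw in loader:
        optimizer.zero_grad()
        loss = model.loss(pos_rw, neg_rw)  # skip-gram loss on random walks
        loss.backward()
        optimizer.step()

node_init = model().detach()               # (num_nodes, 64) initial node features
```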
arXiv Detail & Related papers (2024-04-16T00:02:12Z)
- Applying Self-supervised Learning to Network Intrusion Detection for Network Flows with Graph Neural Network [8.318363497010969]
This paper studies the application of GNNs to identify the specific types of network flows in an unsupervised manner.
To the best of our knowledge, it is the first GNN-based self-supervised method for the multiclass classification of network flows in NIDS.
arXiv Detail & Related papers (2024-03-03T12:34:13Z)
- Neural Attentive Circuits [93.95502541529115]
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and sparse connectivity of neural modules without using domain knowledge.
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
arXiv Detail & Related papers (2022-10-14T18:00:07Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed the Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
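As a hedged illustration of how the two settings could combine, the sketch below pairs an InfoNCE-style contrastive term on unlabeled nodes with a multi-class hinge term that enlarges the margin on labeled nodes; neither loss is claimed to be ICRN's exact formulation.

```python
import torch
import torch.nn.functional as F

def unlabeled_contrastive(z1, z2, tau=0.5):
    """InfoNCE-style loss between two views of the unlabeled nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                 # (N, N) cross-view similarities
    targets = torch.arange(z1.shape[0])      # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

def labeled_margin(logits, labels, margin=1.0):
    """Multi-class hinge loss that enlarges the decision margin."""
    return F.multi_margin_loss(logits, labels, margin=margin)

z1_u, z2_u = torch.randn(32, 64), torch.randn(32, 64)         # unlabeled views
logits_l, y_l = torch.randn(8, 7), torch.randint(0, 7, (8,))  # labeled nodes
loss = unlabeled_contrastive(z1_u, z2_u) + labeled_margin(logits_l, y_l)
```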
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Transferring ConvNet Features from Passive to Active Robot Self-Localization: The Use of Ego-Centric and World-Centric Views [2.362412515574206]
A standard visual place recognition (VPR) subsystem is assumed to be available, and we propose transferring its domain-invariant state recognition ability to train a domain-invariant next-best-view (NBV) planner.
We divide the visual cues available from the CNN model into two types: the output layer cue (OLC) and the intermediate layer cue (ILC).
In our framework, the ILC and OLC are mapped to a state vector and subsequently used to train a multiview NBV planner via deep reinforcement learning.
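A minimal sketch of extracting the two cue types from a CNN with a forward hook and concatenating them into a state vector; the backbone, the layer choice, and the pooling are assumptions.

```python
import torch
from torchvision import models

cnn = models.resnet18(weights=None).eval()   # placeholder backbone
cues = {}

def hook(module, inputs, output):
    # global-average-pool the intermediate feature map into a vector (ILC)
    cues["ilc"] = output.mean(dim=(2, 3))

cnn.layer3.register_forward_hook(hook)       # layer choice is an assumption

with torch.no_grad():
    view = torch.randn(1, 3, 224, 224)       # stand-in ego-centric view
    olc = cnn(view)                          # output-layer cue (logits)
    state = torch.cat([cues["ilc"], olc], dim=1)  # input to the NBV planner
```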
arXiv Detail & Related papers (2022-04-22T04:42:33Z)
- TC-Net: Triple Context Network for Automated Stroke Lesion Segmentation [0.5482532589225552]
We propose a new network, the Triple Context Network (TC-Net), built around the capture of spatial contextual information.
Our network is evaluated on the open ATLAS dataset, achieving the highest Dice score of 0.594, a Hausdorff distance of 27.005 mm, and an average symmetric surface distance of 7.137 mm.
arXiv Detail & Related papers (2022-02-28T11:12:16Z)
- Unsupervised Representation Learning via Neural Activation Coding [66.65837512531729]
We present neural activation coding (NAC) as a novel approach for learning deep representations from unlabeled data for downstream applications.
We show that NAC learns both continuous and discrete representations of data, which we respectively evaluate on two downstream tasks.
arXiv Detail & Related papers (2021-12-07T21:59:45Z)
- Train your classifier first: Cascade Neural Networks Training from upper layers to lower layers [54.47911829539919]
We develop a novel top-down training method which can be viewed as an algorithm for searching for high-quality classifiers.
We tested this method on automatic speech recognition (ASR) tasks and language modelling tasks.
The proposed method consistently improves recurrent neural network ASR models on Wall Street Journal, self-attention ASR models on Switchboard, and AWD-LSTM language models on WikiText-2.
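A toy sketch of the top-down idea: unfreeze the classifier first, then progressively include lower layers; the stage schedule and the model are illustrative, not the paper's recipe.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(128, 128), nn.ReLU(),   # lower block
    nn.Linear(128, 128), nn.ReLU(),   # middle block
    nn.Linear(128, 10),               # upper block: the classifier
)
# indices of modules unfrozen at each stage, classifier first
stages = [[4], [2, 3, 4], [0, 1, 2, 3, 4]]

for stage in stages:
    for i, module in enumerate(model):
        for p in module.parameters():
            p.requires_grad = i in stage
    # ...train for a few epochs here; only unfrozen modules get gradients
```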
arXiv Detail & Related papers (2021-02-09T08:19:49Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
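A minimal sketch of binarized embeddings in this spirit: a sign step with a straight-through gradient yields +/-1 codes that can be compared by Hamming distance. This is a generic pattern, not BGN's actual parameterization.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return torch.sign(x)          # map embeddings to +/-1 codes

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output            # straight-through gradient estimator

z = torch.randn(5, 16, requires_grad=True)   # real-valued node embeddings
codes = BinarizeSTE.apply(z)
hamming = (codes[0] != codes[1]).sum()       # cheap distance between nodes 0, 1
```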
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.