An Acceleration Method Based on Deep Learning and Multilinear Feature
Space
- URL: http://arxiv.org/abs/2110.08679v1
- Date: Sat, 16 Oct 2021 23:49:12 GMT
- Authors: Michel Vinagreiro, Edson Kitani, Armando Lagana, Leopoldo Yoshioka
- Abstract summary: This paper presents an alternative approach based on the Multilinear Feature Space (MFS) method resorting to transfer learning from large CNN architectures.
The proposed method uses CNNs to generate feature maps, although it does not work as a complexity-reduction approach.
Our method, named AMFC, uses transfer learning from a pre-trained CNN to reduce the classification time of a new sample image, with minimal accuracy loss.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision plays a crucial role in Advanced Assistance Systems. Most
computer vision systems are based on Deep Convolutional Neural Networks (deep
CNN) architectures. However, the high computational resource to run a CNN
algorithm is demanding. Therefore, the methods to speed up computation have
become a relevant research issue. However, the architecture-reduction
approaches found in the literature have not yet achieved satisfactory
results for embedded real-time applications. This paper presents an
alternative approach based on the Multilinear Feature Space (MFS) method
resorting to transfer learning from large CNN architectures. The proposed
method uses CNNs to generate feature maps, although it does not work as a
complexity-reduction approach. After the training process, the generated
feature maps are used to create a vector feature space. We use this new vector
space to project any new sample for classification. Our method, named
AMFC, uses transfer learning from a pre-trained CNN to reduce the
classification time of a new sample image, with minimal accuracy loss. Our method
uses the VGG-16 model as the base CNN architecture for experiments; however,
the method works with any similar CNN model. Using the well-known Vehicle Image
Database and the German Traffic Sign Recognition Benchmark, we compared the
classification time of the original VGG-16 model with the AMFC method, and our
method is, on average, 17 times faster. The fast classification time reduces
the computational and memory demands in embedded applications requiring a large
CNN architecture.
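The abstract's pipeline (extract CNN feature maps, build a vector feature space from them, classify new samples by projection) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' AMFC implementation: the VGG-16 feature-extraction step is replaced by random vectors, and the names `build_feature_space`, `project`, and `classify` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_feature_space(features, n_components=8):
    """Build an orthonormal basis spanning the top principal
    directions of the (flattened) training feature maps."""
    mean = features.mean(axis=0)
    centered = features - mean
    # SVD of the centered feature matrix yields the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(x, mean, basis):
    """Project a feature vector into the reduced feature space."""
    return (x - mean) @ basis.T

# Stand-ins for flattened CNN feature maps of two classes.
class_a = rng.normal(loc=0.0, scale=1.0, size=(50, 64))
class_b = rng.normal(loc=3.0, scale=1.0, size=(50, 64))
features = np.vstack([class_a, class_b])
labels = np.array([0] * 50 + [1] * 50)

mean, basis = build_feature_space(features)
projected = project(features, mean, basis)

# One centroid per class in the reduced space.
centroids = np.stack([projected[labels == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    # Classification is a cheap projection plus nearest-centroid lookup,
    # which is where the claimed speedup over a full forward pass comes from.
    p = project(x, mean, basis)
    return int(np.argmin(np.linalg.norm(centroids - p, axis=1)))

# A new sample drawn near class 1 is assigned the nearest centroid's label.
new_sample = rng.normal(loc=3.0, scale=1.0, size=64)
print(classify(new_sample))
```

In the paper's setting, the random vectors above would instead be feature maps taken from a pre-trained VGG-16 (or a similar CNN), flattened once per training image; only the cheap projection runs at classification time.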
Related papers
- Training Convolutional Neural Networks with the Forward-Forward algorithm [1.74440662023704]
The Forward-Forward (FF) algorithm has so far only been used in fully connected networks.
We show how the FF paradigm can be extended to CNNs.
Our FF-trained CNN, featuring a novel spatially-extended labeling technique, achieves a classification accuracy of 99.16% on the MNIST hand-written digits dataset.
arXiv Detail & Related papers (2023-12-22T18:56:35Z)
- Transferability of Convolutional Neural Networks in Stationary Learning Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
arXiv Detail & Related papers (2023-07-21T13:51:45Z)
- A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel Training Applied to Image Recognition Problems [0.0]
A novel CNN-DNN architecture is proposed that naturally supports a model parallel training strategy.
The proposed approach can significantly reduce the required training time compared to the global model.
Results show that the proposed approach can also help to improve the accuracy of the underlying classification problem.
arXiv Detail & Related papers (2023-02-13T18:06:59Z)
- FlowNAS: Neural Architecture Search for Optical Flow Estimation [65.44079917247369]
We propose a neural architecture search method named FlowNAS to automatically find a better encoder architecture for the flow estimation task.
Experimental results show that the discovered architecture with the weights inherited from the super-network achieves 4.67% F1-all error on KITTI.
arXiv Detail & Related papers (2022-07-04T09:05:25Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- A Novel Sleep Stage Classification Using CNN Generated by an Efficient Neural Architecture Search with a New Data Processing Trick [4.365107026636095]
We propose an efficient five-sleep-stage classification method using convolutional neural networks (CNNs) with a novel data processing trick.
We make full use of a genetic algorithm (GA), NASG, to search for the best CNN architecture.
We verify convergence of our data processing trick and compare the performance of traditional CNNs before and after using our trick.
arXiv Detail & Related papers (2021-10-27T10:36:52Z)
- Boggart: Accelerating Retrospective Video Analytics via Model-Agnostic Ingest Processing [5.076419064097734]
Boggart is a retrospective video analytics system that delivers ingest-time speedups in a model-agnostic manner.
Our underlying insight is that traditional computer vision (CV) algorithms are capable of performing computations that can be used to accelerate diverse queries with wide-ranging CNNs.
At query-time, Boggart uses several novel techniques to collect the smallest sample of CNN results required to meet the target accuracy.
arXiv Detail & Related papers (2021-06-21T19:21:16Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Learning CNN filters from user-drawn image markers for coconut-tree image classification [78.42152902652215]
We present a method that needs a minimal set of user-selected images to train the CNN's feature extractor.
The method learns the filters of each convolutional layer from user-drawn markers in image regions that discriminate classes.
It does not rely on optimization based on backpropagation, and we demonstrate its advantages on the binary classification of coconut-tree aerial images.
arXiv Detail & Related papers (2020-08-08T15:50:23Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum based scheme that smoothes the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- Convolution Neural Network Architecture Learning for Remote Sensing Scene Classification [22.29957803992306]
This paper proposes an automatically architecture learning procedure for remote sensing scene classification.
We introduce a learning strategy which can allow efficient search in the architecture space by means of gradient descent.
An architecture generator finally maps the set of parameters into the CNN used in our experiments.
arXiv Detail & Related papers (2020-01-27T07:42:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.