Learning Local Complex Features using Randomized Neural Networks for
Texture Analysis
- URL: http://arxiv.org/abs/2007.05643v2
- Date: Mon, 17 Aug 2020 18:51:30 GMT
- Title: Learning Local Complex Features using Randomized Neural Networks for
Texture Analysis
- Authors: Lucas C. Ribas, Leonardo F. S. Scabini, Jarbas Joaci de Mesquita Sá Junior and Odemir M. Bruno
- Abstract summary: We present a new approach that combines a learning technique and the Complex Network (CN) theory for texture analysis.
This method takes advantage of the representation capacity of CN to model a texture image as a directed network.
This neural network has a single hidden layer and uses a fast learning algorithm, which is able to learn local CN patterns for texture characterization.
- Score: 0.1474723404975345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Texture is a visual attribute largely used in many problems of image
analysis. Currently, many methods that use learning techniques have been
proposed for texture discrimination, achieving improved performance over
previous handcrafted methods. In this paper, we present a new approach that
combines a learning technique and the Complex Network (CN) theory for texture
analysis. This method takes advantage of the representation capacity of CN to
model a texture image as a directed network and uses the topological
information of vertices to train a randomized neural network. This neural
network has a single hidden layer and uses a fast learning algorithm, which is
able to learn local CN patterns for texture characterization. Thus, we use the
weights of the trained neural network to compose a feature vector. These feature
vectors are evaluated in a classification experiment in four widely used image
databases. Experimental results show a high classification performance of the
proposed method when compared to other methods, indicating that our approach
can be used in many image analysis problems.
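The abstract outlines a concrete pipeline: model the image as a directed network over its pixels, describe each vertex by topological measures, train a randomized single-hidden-layer network (random fixed input weights, output weights solved in closed form), and use the learned output weights as the texture descriptor. The sketch below is illustrative only, not the paper's exact method: the choice of in-/out-degree as the topological measures, the pixel-intensity regression target, and every parameter value are assumptions.

```python
import numpy as np

def texture_features(img, radius=2, threshold=0.4, hidden=10, seed=0):
    """Illustrative sketch: directed pixel network -> vertex degrees ->
    ELM-style randomized network -> output weights as the feature vector."""
    h, w = img.shape
    out_deg = np.zeros((h, w))
    in_deg = np.zeros((h, w))
    # An edge links two pixels within `radius` when their normalized
    # intensity difference is small, directed from brighter to darker.
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dy == 0 and dx == 0) or dy * dy + dx * dx > radius * radius:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)  # wraps at borders
            diff = (shifted - img) / 255.0
            close = np.abs(diff) <= threshold
            out_deg += close & (diff < 0)  # this pixel is brighter: outgoing edge
            in_deg += close & (diff > 0)   # this pixel is darker: incoming edge
    # One sample per pixel; the topological measures are the inputs.
    X = np.stack([out_deg.ravel(), in_deg.ravel()], axis=1)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)
    y = img.ravel() / 255.0  # regression target: pixel intensity (an assumption)
    # Randomized single-hidden-layer network: random fixed input weights,
    # output weights solved in closed form by least squares.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1] + 1, hidden))
    Z = np.tanh(np.hstack([X, np.ones((len(X), 1))]) @ W)
    beta, *_ = np.linalg.lstsq(np.hstack([Z, np.ones((len(Z), 1))]), y, rcond=None)
    return beta  # feature vector of length hidden + 1
```

The closed-form least-squares fit is what makes this family of networks fast to train: only the output layer is learned, so no iterative backpropagation is needed.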
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images are fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Multilayer deep feature extraction for visual texture recognition [0.0]
This paper is focused on improving the accuracy of convolutional neural networks in texture classification.
It is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating such features using Fisher vector.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
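The entry above aggregates features from multiple convolutional layers with a Fisher vector. A minimal sketch of the encoding step, assuming a diagonal GMM (means, sigmas, priors) has already been fitted to training descriptors; only the first-order statistics (gradients with respect to the means) are kept, and all names are illustrative:

```python
import numpy as np

def fisher_vector(descriptors, means, sigmas, priors):
    """First-order Fisher vector of a set of local descriptors under a
    pre-fitted diagonal GMM with K components in D dimensions."""
    K, D = means.shape
    N = descriptors.shape[0]
    # Posterior (soft assignment) of each descriptor to each Gaussian.
    log_p = np.empty((N, K))
    for k in range(K):
        z = (descriptors - means[k]) / sigmas[k]
        log_p[:, k] = np.log(priors[k]) - 0.5 * np.sum(
            z * z + np.log(2 * np.pi * sigmas[k] ** 2), axis=1)
    log_p -= log_p.max(axis=1, keepdims=True)  # for numerical stability
    post = np.exp(log_p)
    post /= post.sum(axis=1, keepdims=True)
    # Gradient with respect to each Gaussian mean.
    fv = np.empty((K, D))
    for k in range(K):
        z = (descriptors - means[k]) / sigmas[k]
        fv[k] = (post[:, k:k + 1] * z).sum(axis=0) / (N * np.sqrt(priors[k]))
    fv = fv.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))      # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)    # L2 normalization
```

In practice the descriptors would be the activations pooled from several layers of a pretrained CNN, and second-order (variance) statistics are often appended as well.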
arXiv Detail & Related papers (2022-08-22T03:53:43Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- VisGraphNet: a complex network interpretation of convolutional neural features [6.50413414010073]
We propose and investigate the use of visibility graphs to model the feature map of a neural network.
The work is motivated by an alternative viewpoint provided by these graphs over the original data.
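The visibility-graph construction this entry relies on can be sketched for a 1-D signal (e.g. one row of a feature map): two samples are linked if the straight line between them passes above every intermediate sample. This is the standard natural visibility graph, not necessarily the paper's exact variant:

```python
import numpy as np

def visibility_graph(series):
    """Adjacency matrix of the natural visibility graph of a 1-D signal:
    nodes i and j are linked when the sight line from (i, y_i) to (j, y_j)
    clears every intermediate sample."""
    y = np.asarray(series, dtype=float)
    n = len(y)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            visible = True
            for k in range(i + 1, j):
                # Height of the sight line at position k.
                line = y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                if y[k] >= line:
                    visible = False
                    break
            if visible:
                adj[i, j] = adj[j, i] = True
    return adj
```

Adjacent samples are always mutually visible, so the graph is connected; topological measures of the resulting graph (degree distribution, clustering) then serve as descriptors of the underlying signal.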
arXiv Detail & Related papers (2021-08-27T20:21:04Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that trained networks by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Noise-robust classification with hypergraph neural network [4.003697389752555]
This paper presents a novel version of the hypergraph neural network method.
The accuracy of this method is evaluated and compared with that of four existing methods.
Experimental results show that the hypergraph neural network methods achieve the best performance when the noise level increases.
arXiv Detail & Related papers (2021-02-03T08:34:53Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers [48.11796810425477]
We show that IB learning is, in fact, equivalent to a special class of the quantization problem.
We propose a novel learning framework, "Aggregated Learning", for classification with neural network models.
The effectiveness of this framework is verified through extensive experiments on standard image recognition and text classification tasks.
arXiv Detail & Related papers (2020-01-12T16:22:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.