Texture image analysis based on joint of multi directions GLCM and local
ternary patterns
- URL: http://arxiv.org/abs/2209.01866v1
- Date: Mon, 5 Sep 2022 09:53:00 GMT
- Title: Texture image analysis based on joint of multi directions GLCM and local
ternary patterns
- Authors: Akshakhi Kumar Pritoonka, Faeze Kiani
- Abstract summary: Texture features can be used in many different computer vision and machine learning applications.
A new approach is proposed based on the combination of two texture descriptors, the co-occurrence matrix and local ternary patterns.
Experimental results show that the proposed approach provides a higher classification rate than some state-of-the-art approaches.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The human visual system relies on three main components, namely color,
texture, and shape, to detect and identify objects and environments. Hence,
texture analysis has received much attention from researchers over the last two
decades. Texture features can be used in many different computer vision and
machine learning applications. To date, many approaches have been proposed to
classify textures, most of which treat classification accuracy as the main
challenge to be improved. In this article, a new approach is proposed based on
the combination of two efficient texture descriptors: the gray-level
co-occurrence matrix (GLCM) and local ternary patterns (LTP). First, basic
local binary patterns and LTP are applied to extract local textural
information. Next, a subset of statistical features is extracted from
gray-level co-occurrence matrices. Finally, the concatenated features are used
to train classifiers. Performance is evaluated on the Brodatz benchmark dataset
in terms of accuracy. Experimental results show that the proposed approach
provides a higher classification rate than some state-of-the-art approaches.
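The pipeline described in the abstract (GLCM statistics over multiple directions, concatenated with an LTP histogram) can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the quantization level, ternary threshold `t`, offsets, and choice of Haralick statistics are all illustrative assumptions.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """GLCM for one direction (dx, dy), reduced to a few Haralick-style
    statistics (contrast, energy, homogeneity). Levels are an assumption."""
    q = (img.astype(float) / 256 * levels).astype(int)  # quantize gray levels
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()                               # normalize to probabilities
    i, j = np.indices((levels, levels))
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

def ltp_histogram(img, t=5):
    """Local ternary pattern over a 3x3 neighborhood, split into the usual
    upper/lower binary codes and histogrammed (threshold t is illustrative)."""
    c = img[1:-1, 1:-1].astype(int)                     # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(int)
        upper += (n >= c + t).astype(int) << k          # neighbor clearly brighter
        lower += (n <= c - t).astype(int) << k          # neighbor clearly darker
    hu, _ = np.histogram(upper, bins=256, range=(0, 256))
    hl, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hu, hl]).astype(float)

def texture_descriptor(img):
    # Concatenate GLCM statistics from two directions with the LTP histogram,
    # as the abstract's "concatenated features" step suggests.
    return np.concatenate([glcm_features(img, dx=1, dy=0),
                           glcm_features(img, dx=0, dy=1),
                           ltp_histogram(img)])
```

The resulting vector would then feed any standard classifier; the paper does not fix a specific one here.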
Related papers
- Deep Neural Networks Fused with Textures for Image Classification [20.58839604333332]
Fine-grained image classification (FGIC) is a challenging task in computer vision.
We propose a fusion approach that addresses FGIC by combining global texture with local patch-based information.
Our method attains better classification accuracy than existing methods by notable margins.
arXiv Detail & Related papers (2023-08-03T15:21:08Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Multilayer deep feature extraction for visual texture recognition [0.0]
This paper is focused on improving the accuracy of convolutional neural networks in texture classification.
This is done by extracting features from multiple convolutional layers of a pretrained neural network and aggregating them using Fisher vectors.
We verify the effectiveness of our method on texture classification of benchmark datasets, as well as on a practical task of Brazilian plant species identification.
arXiv Detail & Related papers (2022-08-22T03:53:43Z) - Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
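The Gaussian-Laplacian pyramid mentioned above separates a texture into spatial frequency bands. A minimal numpy sketch of that decomposition follows; the binomial blur kernel and the number of levels are illustrative assumptions, not details from the paper.

```python
import numpy as np

def blur(img):
    """Separable 5-tap binomial blur (approximate Gaussian), reflect-padded."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    p = np.pad(img, 2, mode="reflect")
    # horizontal pass, then vertical pass
    h = sum(k[i] * p[:, i:i + img.shape[1]] for i in range(5))
    return sum(k[i] * h[i:i + img.shape[0], :] for i in range(5))

def laplacian_pyramid(img, levels=3):
    """Band-pass images: each level is the difference between an image and
    its blurred version; the low-frequency residual is kept last."""
    img = img.astype(float)
    bands = []
    for _ in range(levels):
        low = blur(img)
        bands.append(img - low)      # band-pass detail at this scale
        img = low[::2, ::2]          # downsample for the next octave
    bands.append(img)                # low-frequency residual
    return bands
```

Texture descriptors would then be computed per band and concatenated, which is the multiscale aggregation the summary describes.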
arXiv Detail & Related papers (2022-04-21T01:32:22Z) - Generalizing Face Forgery Detection with High-frequency Features [63.33397573649408]
Current CNN-based detectors tend to overfit to method-specific color textures and thus fail to generalize.
We propose to utilize high-frequency noise for face forgery detection via two modules.
The first is a multi-scale high-frequency feature extraction module that extracts high-frequency noise at multiple scales.
The second is a residual-guided spatial attention module that guides the low-level RGB feature extractor to concentrate more on forgery traces from a new perspective.
arXiv Detail & Related papers (2021-03-23T08:19:21Z) - Learning Statistical Texture for Semantic Segmentation [53.7443670431132]
We propose a novel Statistical Texture Learning Network (STLNet) for semantic segmentation.
For the first time, STLNet analyzes the distribution of low-level information and efficiently utilizes it for the task.
Based on a quantization and counting operator (QCO), two modules are introduced: (1) a Texture Enhance Module (TEM), to capture texture-related information and enhance texture details; (2) a Pyramid Texture Feature Extraction Module (PTFEM), to effectively extract statistical texture features at multiple scales.
arXiv Detail & Related papers (2021-03-06T15:05:35Z) - Dynamic Texture Recognition via Nuclear Distances on Kernelized
Scattering Histogram Spaces [95.21606283608683]
This work proposes to describe dynamic textures as kernelized spaces of frame-wise feature vectors computed using the Scattering transform.
By combining these spaces with a basis-invariant metric, we get a framework that produces competitive results for nearest neighbor classification and state-of-the-art results for nearest class center classification.
arXiv Detail & Related papers (2021-02-01T13:54:24Z) - A cellular automata approach to local patterns for texture recognition [3.42658286826597]
We propose a texture descriptor that combines the representational power of cellular automata for complex objects with the known effectiveness of local descriptors in texture analysis.
Our proposal outperforms other classical and state-of-the-art approaches, especially on the real-world problem.
arXiv Detail & Related papers (2020-07-15T03:25:51Z) - On the Texture Bias for Few-Shot CNN Segmentation [21.349705243254423]
Convolutional Neural Networks (CNNs) are driven by shapes to perform visual recognition tasks.
Recent evidence suggests that the texture bias in CNNs yields higher-performing models when learning from large labeled training datasets.
We propose a novel architecture that integrates a set of Difference of Gaussians (DoG) to attenuate high-frequency local components in the feature space.
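The Difference of Gaussians (DoG) mentioned above is a classic band-limited filter: subtracting a wide Gaussian response from a narrow one suppresses both the lowest and the highest frequencies. A minimal 1-D sketch, with illustrative sigma values not taken from the paper:

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def dog_kernel_1d(sigma1=1.0, sigma2=2.0, radius=6):
    """Difference of Gaussians: the narrow Gaussian already attenuates
    the highest frequencies, and subtracting the wider one removes the
    low-frequency content, leaving a band-pass response."""
    return gaussian_kernel_1d(sigma1, radius) - gaussian_kernel_1d(sigma2, radius)
```

In the architecture the summary describes, such filters would act on feature maps rather than raw pixels; this sketch only illustrates the filter itself.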
arXiv Detail & Related papers (2020-03-09T11:55:47Z) - Image Matching across Wide Baselines: From Paper to Practice [80.9424750998559]
We introduce a comprehensive benchmark for local features and robust estimation algorithms.
Our pipeline's modular structure allows easy integration, configuration, and combination of different methods.
We show that with proper settings, classical solutions may still outperform the perceived state of the art.
arXiv Detail & Related papers (2020-03-03T15:20:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.