Deep Image Feature Learning with Fuzzy Rules
- URL: http://arxiv.org/abs/1905.10575v3
- Date: Fri, 17 Mar 2023 06:10:49 GMT
- Title: Deep Image Feature Learning with Fuzzy Rules
- Authors: Xiang Ma, Liangzhe Chen, Zhaohong Deng, Peng Xu, Qisheng Yan, Kup-Sze
Choi, Shitong Wang
- Abstract summary: The paper proposes a more interpretable and scalable feature learning method, i.e., deep image feature learning with fuzzy rules (DIFL-FR).
The method progressively learns image features in a layer-by-layer manner based on fuzzy rules, so the feature learning process can be explained by the generated rules.
In addition, the method operates in an unsupervised setting and can be easily extended to supervised and semi-supervised learning.
- Score: 25.4399762282053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The methods of extracting image features are the key to many image processing
tasks. At present, the most popular method is the deep neural network which can
automatically extract robust features through end-to-end training instead of
hand-crafted feature extraction. However, the deep neural network currently
faces many challenges: 1) its effectiveness depends heavily on large datasets,
which makes the computational cost of training very high; 2) it is usually
regarded as a black-box model with poor interpretability. To meet the above
challenges, a more interpretable and scalable feature learning method, i.e.,
deep image feature learning with fuzzy rules (DIFL-FR), is proposed in the
paper, which combines the rule-based fuzzy modeling technique and the deep
stacked learning strategy. The method progressively learns image features
in a layer-by-layer manner based on fuzzy rules, so the feature learning
process can be explained by the generated rules. More importantly, the
learning process relies only on forward propagation, without back propagation
or iterative optimization, which results in high learning efficiency. In
addition, the method operates in an unsupervised setting and can be easily
extended to supervised and semi-supervised learning. Extensive experiments
are conducted on image datasets of different scales. The results clearly
demonstrate the effectiveness of the proposed method.
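As an illustration of the forward-only, layer-by-layer idea described above, the following is a minimal Python sketch. It assumes Gaussian-membership TSK-style rules whose antecedents come from k-means clustering and whose consequents are solved in closed form by ridge regression against a reconstruction target; the function names, the reconstruction objective, and all hyperparameters are illustrative assumptions, not the exact DIFL-FR formulation from the paper.

```python
# Hypothetical sketch: stacked "fuzzy rule layers" trained layer by layer with
# forward passes only.  K-means antecedents, Gaussian memberships, and the
# ridge-regression reconstruction target are assumptions for illustration,
# not the authors' exact method.
import numpy as np
from sklearn.cluster import KMeans


def fuzzy_rule_layer(X, n_rules=8, reg=1e-2, seed=0):
    """Fit one TSK-style fuzzy rule layer without back propagation."""
    km = KMeans(n_clusters=n_rules, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_                       # rule prototypes
    sigma = X.std() + 1e-8                              # shared kernel width

    # Firing strength of each rule for each sample (Gaussian membership).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    fire = np.exp(-d2 / (2 * sigma ** 2))
    fire /= fire.sum(axis=1, keepdims=True) + 1e-12     # normalized firing

    # Weighted design matrix: each rule sees the input scaled by its firing.
    ones = np.ones((X.shape[0], 1))
    Xa = np.hstack([ones, X])                           # affine consequents
    G = np.hstack([fire[:, [r]] * Xa for r in range(n_rules)])

    # Closed-form (ridge) solve of consequent parameters: forward pass only.
    W = np.linalg.solve(G.T @ G + reg * np.eye(G.shape[1]), G.T @ X)
    features = G @ W                                    # layer output
    return (centers, sigma, W), features


def stack_layers(X, depth=3, n_rules=8):
    """Greedily stack fuzzy rule layers; each layer feeds the next."""
    feats, layers = X, []
    for _ in range(depth):
        params, feats = fuzzy_rule_layer(feats, n_rules=n_rules)
        layers.append(params)
    return layers, feats


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))                      # stand-in image features
    layers, F = stack_layers(X, depth=3, n_rules=8)
    print(F.shape)
```

The design point this mimics is that each layer's parameters are obtained by a single closed-form solve, so stacking layers never requires back propagation, which is where the claimed efficiency comes from; the rule centers and memberships also remain inspectable, which is what makes the learned features explainable by rules.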
Related papers
- EfficientTrain++: Generalized Curriculum Learning for Efficient Visual Backbone Training [79.96741042766524]
We reformulate the training curriculum as a soft-selection function.
We show that exposing the contents of natural images can be readily achieved by adjusting the intensity of data augmentation.
The resulting method, EfficientTrain++, is simple, general, yet surprisingly effective.
arXiv Detail & Related papers (2024-05-14T17:00:43Z)
- One-Shot Image Restoration [0.0]
Experimental results demonstrate the applicability, robustness and computational efficiency of the proposed approach for supervised image deblurring and super-resolution.
Our results showcase significant improvement of learning models' sample efficiency, generalization and time complexity.
arXiv Detail & Related papers (2024-04-26T14:03:23Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, generated images, regardless of their generative model, are projected outside that subspace (see the sketch after this list).
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can be used to mitigate the data-hungry training needs of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- Genetic Programming-Based Evolutionary Deep Learning for Data-Efficient Image Classification [3.9310727060473476]
This paper proposes a new genetic programming-based evolutionary deep learning approach to data-efficient image classification.
The new approach can automatically evolve variable-length models using many important operators from both image and classification domains.
A flexible multi-layer representation enables the new approach to automatically construct shallow or deep models/trees for different tasks.
arXiv Detail & Related papers (2022-09-27T08:10:16Z)
- Image Super-Resolution with Deep Dictionary [12.18340575383456]
We propose an end-to-end super-resolution network with a deep dictionary (SRDD).
We show that explicit learning of a high-resolution dictionary makes the network more robust for out-of-domain test images.
arXiv Detail & Related papers (2022-07-19T12:31:17Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model the data and regularization terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Towards Interpretable Deep Metric Learning with Structural Matching [86.16700459215383]
We present a deep interpretable metric learning (DIML) method for more transparent embedding learning.
Our method is model-agnostic, which can be applied to off-the-shelf backbone networks and metric learning methods.
We evaluate our method on three major benchmarks of deep metric learning including CUB200-2011, Cars196, and Stanford Online Products.
arXiv Detail & Related papers (2021-08-12T17:59:09Z)
- Learning Visual Representations for Transfer Learning by Suppressing Texture [38.901410057407766]
In self-supervised learning, texture as a low-level cue may provide shortcuts that prevent the network from learning higher level representations.
We propose to use classic methods based on anisotropic diffusion to augment training using images with suppressed texture.
We empirically show that our method achieves state-of-the-art results on object detection and image classification.
arXiv Detail & Related papers (2020-11-03T18:27:03Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
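Referenced from the "Detecting Generated Images by Real Images Only" entry above, here is a minimal sketch of the dense-subspace idea: fit a low-dimensional subspace on features of real images only and flag inputs that project far outside it. PCA, the reconstruction-error score, the 99th-percentile threshold, and the stand-in features are all illustrative assumptions, not that paper's actual method.

```python
# Hypothetical illustration of the "dense real-image subspace" idea: learn a
# subspace from real-image features only, then flag inputs whose
# reconstruction error falls outside it.  PCA and the threshold rule are
# stand-ins, not the cited paper's method.
import numpy as np
from sklearn.decomposition import PCA


def fit_real_subspace(real_feats, n_components=16):
    """Learn a dense subspace from real-image features only."""
    pca = PCA(n_components=n_components).fit(real_feats)
    # Calibrate a threshold from the real data itself (99th percentile of
    # reconstruction error); no generated images are needed for training.
    recon = pca.inverse_transform(pca.transform(real_feats))
    errs = np.linalg.norm(real_feats - recon, axis=1)
    return pca, np.percentile(errs, 99)


def is_generated(feats, pca, threshold):
    """Score new samples: large distance to the subspace -> likely generated."""
    recon = pca.inverse_transform(pca.transform(feats))
    errs = np.linalg.norm(feats - recon, axis=1)
    return errs > threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    real = rng.normal(size=(500, 64))                  # stand-in real features
    fake = rng.normal(size=(50, 64)) + 3.0             # shifted, off-subspace
    pca, thr = fit_real_subspace(real)
    print(is_generated(fake, pca, thr).mean())
```

Because the subspace and threshold are calibrated on real images alone, the detector does not depend on which generative model produced the fakes, which is the point the entry above makes.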