Semantic Clustering based Deduction Learning for Image Recognition and
Classification
- URL: http://arxiv.org/abs/2112.13165v1
- Date: Sat, 25 Dec 2021 01:31:21 GMT
- Title: Semantic Clustering based Deduction Learning for Image Recognition and
Classification
- Authors: Wenchi Ma, Xuemin Tu, Bo Luo, Guanghui Wang
- Abstract summary: The paper proposes a semantic clustering-based deduction learning approach that mimics the learning and thinking process of the human brain.
The proposed approach is supported theoretically and empirically through extensive experiments.
- Score: 19.757743366620613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper proposes a semantic clustering-based deduction learning approach
that mimics the learning and thinking process of the human brain. Human beings
make judgments based on experience and cognition, and as a result, no one would
recognize an unknown animal as a car. Inspired by this observation, we propose
to train deep learning models with a clustering prior that guides them to
deduce and summarize semantic relations from classification attributes, e.g., a
cat belongs to animals while a car belongs to vehicles. Specifically, if an
image is labeled as a cat, the model is trained to learn that "this image is
definitely not any random class that lies outside the animal cluster". The
proposed approach realizes high-level clustering in the semantic space,
enabling the model to deduce the relations among various classes during the
learning process. In addition, the paper introduces a semantic-prior-based
random search for opposite labels to ensure a smooth clustering distribution
and robust classifiers. The proposed approach is supported theoretically and
empirically through extensive experiments. We compare performance against
state-of-the-art classifiers on popular benchmarks, and the generalization
ability is verified by adding noisy labels to the datasets. Experimental
results demonstrate the superiority of the proposed approach.
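To make the core idea concrete, the following is a minimal PyTorch-style sketch (not the authors' code) of how a semantic clustering prior and a random search for opposite labels could be combined into a training loss. The class-to-cluster table, the form of the opposite-label penalty, and the weight lam are illustrative assumptions rather than the paper's exact formulation.

    # Minimal sketch (illustrative assumptions, not the authors' implementation).
    import random
    import torch
    import torch.nn.functional as F

    # Hypothetical semantic clustering prior: class index -> semantic cluster,
    # e.g., a CIFAR-10-style split into "animal" and "vehicle" superclasses.
    CLASS_TO_CLUSTER = {
        0: "vehicle", 1: "vehicle", 8: "vehicle", 9: "vehicle",
        2: "animal", 3: "animal", 4: "animal", 5: "animal",
        6: "animal", 7: "animal",
    }

    def sample_opposite_labels(targets: torch.Tensor) -> torch.Tensor:
        """Random search for opposite labels: for each ground-truth class,
        draw a random class that lies outside its semantic cluster."""
        opposite = []
        for t in targets.tolist():
            cluster = CLASS_TO_CLUSTER[t]
            candidates = [c for c, k in CLASS_TO_CLUSTER.items() if k != cluster]
            opposite.append(random.choice(candidates))
        return torch.tensor(opposite, device=targets.device)

    def deduction_loss(logits: torch.Tensor, targets: torch.Tensor,
                       lam: float = 0.1) -> torch.Tensor:
        """Cross-entropy on the true label plus a term that suppresses the
        probability assigned to a randomly chosen out-of-cluster class."""
        ce = F.cross_entropy(logits, targets)
        opposite = sample_opposite_labels(targets)
        probs = F.softmax(logits, dim=1)
        opp_prob = probs.gather(1, opposite.unsqueeze(1)).squeeze(1)
        # "This image is not that out-of-cluster class": penalize -log(1 - p).
        opposite_term = -torch.log(1.0 - opp_prob + 1e-8).mean()
        return ce + lam * opposite_term

In this sketch, the opposite-label term penalizes probability mass assigned to a randomly drawn out-of-cluster class, mirroring the intuition that a cat image is definitely not a random class outside the animal cluster; the paper's actual objective may differ.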
Related papers
- Explainable Metric Learning for Deflating Data Bias [2.977255700811213]
We present an explainable metric learning framework, which constructs hierarchical levels of semantic segments of an image for better interpretability.
Our approach enables a more human-understandable similarity measurement between two images based on the semantic segments within them.
arXiv Detail & Related papers (2024-07-05T21:07:27Z) - Accurate Explanation Model for Image Classifiers using Class Association Embedding [5.378105759529487]
We propose a generative explanation model that combines the advantages of global and local knowledge.
Class association embedding (CAE) encodes each sample into a pair of separated class-associated and individual codes.
A building-block coherency feature extraction algorithm is proposed that efficiently separates class-associated features from individual ones.
arXiv Detail & Related papers (2024-06-12T07:41:00Z) - Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z) - Open-Set Recognition with Gradient-Based Representations [16.80077149399317]
We propose to utilize gradient-based representations to train an unknown detector with instances of known classes only.
We show that our gradient-based approach outperforms state-of-the-art methods by up to 11.6% in open-set classification.
arXiv Detail & Related papers (2022-06-16T14:54:12Z) - Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z) - Learning Debiased and Disentangled Representations for Semantic
Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z) - GAN for Vision, KG for Relation: a Two-stage Deep Network for Zero-shot
Action Recognition [33.23662792742078]
We propose a two-stage deep neural network for zero-shot action recognition.
In the sampling stage, we utilize a generative adversarial network (GAN) trained on action features and word vectors of seen classes.
In the classification stage, we construct a knowledge graph based on the relationship between word vectors of action classes and related objects.
arXiv Detail & Related papers (2021-05-25T09:34:42Z) - Intersection Regularization for Extracting Semantic Attributes [72.53481390411173]
We consider the problem of supervised classification, such that the features that the network extracts match an unseen set of semantic attributes.
For example, when learning to classify images of birds into species, we would like to observe the emergence of features that zoologists use to classify birds.
We propose training a neural network with discrete top-level activations, which is followed by a multi-layered perceptron (MLP) and a parallel decision tree.
arXiv Detail & Related papers (2021-03-22T14:32:44Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that availability of such external semantic information in conjunction with the visual semantics from images boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.