Deep Inverse Feature Learning: A Representation Learning of Error
- URL: http://arxiv.org/abs/2003.04285v1
- Date: Mon, 9 Mar 2020 17:45:44 GMT
- Title: Deep Inverse Feature Learning: A Representation Learning of Error
- Authors: Behzad Ghazanfari, Fatemeh Afghah
- Abstract summary: This paper introduces a novel perspective about error in machine learning and proposes inverse feature learning (IFL) as a representation learning approach.
The inverse feature learning method operates based on a deep clustering approach to obtain a qualitative form of the representation of error as features.
The experimental results show that the proposed method leads to promising results in classification and especially in clustering.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a novel perspective about error in machine learning and
proposes inverse feature learning (IFL) as a representation learning approach
that learns a set of high-level features based on the representation of error
for classification or clustering purposes. This perspective on error
representation is fundamentally different from that of current learning
methods: classification approaches interpret error as a function of the
differences between the true labels and the predicted ones, while clustering
approaches rely on clustering objectives such as compactness. The inverse
feature learning method operates based on a deep clustering
approach to obtain a qualitative form of the representation of error as
features. The performance of the proposed IFL method is evaluated by applying
the learned features along with the original features, or just using the
learned features in different classification and clustering techniques for
several data sets. The experimental results show that the proposed method leads
to promising results in classification and especially in clustering. In
classification, the proposed features along with the primary features improve
the results of most of the classification methods on several popular data sets.
In clustering, the performance of different clustering methods is considerably
improved on different data sets. Interestingly, even a few features of the
error representation capture highly informative aspects of the primary
features. We hope this paper encourages the use of error representation
learning in other feature learning domains.
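The abstract describes a pipeline that derives an error representation from a deep clustering stage and then uses the learned features alongside the original ones. As a rough illustration only (not the authors' method), the sketch below substitutes plain k-means for the deep clustering stage and uses per-instance cluster distances plus the nearest-center margin as stand-in "error" features; every specific choice here is an assumption.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm), standing in for the deep
    clustering stage described in the abstract."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every center, shape (n, k).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return centers, labels

def error_features(X, centers):
    """Hypothetical per-instance 'error representation': distances to each
    cluster center plus the margin between the two nearest centers, a
    qualitative measure of how well each instance fits the clustering."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    sorted_d = np.sort(d, axis=1)
    margin = (sorted_d[:, 1] - sorted_d[:, 0])[:, None]
    return np.hstack([d, margin])

# Toy data: two well-separated Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, _ = kmeans(X, k=2)
# Concatenate original features with the learned error features, as the
# abstract's evaluation protocol does.
augmented = np.hstack([X, error_features(X, centers)])
print(augmented.shape)  # (100, 5): 2 original + 2 distances + 1 margin
```

The `augmented` matrix could then be fed to any downstream classifier or clustering method, mirroring the paper's evaluation of "learned features along with the original features."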
Related papers
- Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z)
- A Study on Representation Transfer for Few-Shot Learning [5.717951523323085]
Few-shot classification aims to learn to classify new object categories well using only a few labeled examples.
In this work we perform a systematic study of various feature representations for few-shot classification.
We find that learning from more complex tasks tends to give better representations for few-shot classification.
arXiv Detail & Related papers (2022-09-05T17:56:02Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Fine-Grained Visual Classification using Self Assessment Classifier [12.596520707449027]
Extracting discriminative features plays a crucial role in the fine-grained visual classification task.
In this paper, we introduce a Self Assessment, which simultaneously leverages the representation of the image and top-k prediction classes.
We show that our method achieves new state-of-the-art results on CUB200-2011, Stanford Dog, and FGVC Aircraft datasets.
arXiv Detail & Related papers (2022-05-21T07:41:27Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Clustering-friendly Representation Learning via Instance Discrimination and Feature Decorrelation [0.0]
We propose a clustering-friendly representation learning method using instance discrimination and feature decorrelation.
In evaluations of image clustering using CIFAR-10 and ImageNet-10, our method achieves accuracy of 81.5% and 95.4%, respectively.
arXiv Detail & Related papers (2021-05-31T22:59:31Z)
- Graph Contrastive Clustering [131.67881457114316]
We propose a novel graph contrastive learning framework, which is applied to the clustering task, yielding the Graph Contrastive Clustering (GCC) method.
Specifically, on the one hand, the graph Laplacian based contrastive loss is proposed to learn more discriminative and clustering-friendly features.
On the other hand, a novel graph-based contrastive learning strategy is proposed to learn more compact clustering assignments.
arXiv Detail & Related papers (2021-04-03T15:32:49Z)
- Neural Networks as Functional Classifiers [0.0]
We extend notable deep learning methodologies to the domain of functional data for the purpose of classification problems.
We highlight the effectiveness of our method in a number of classification applications such as classification of spectrographic data.
arXiv Detail & Related papers (2020-10-09T00:11:01Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Inverse Feature Learning: Feature learning based on Representation Learning of Error [12.777440204911022]
This paper proposes inverse feature learning as a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach.
The proposed method results in significantly better performance compared to the state-of-the-art classification techniques for several popular data sets.
arXiv Detail & Related papers (2020-03-08T00:22:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.