Inverse Feature Learning: Feature learning based on Representation Learning of Error
- URL: http://arxiv.org/abs/2003.03689v1
- Date: Sun, 8 Mar 2020 00:22:26 GMT
- Title: Inverse Feature Learning: Feature learning based on Representation Learning of Error
- Authors: Behzad Ghazanfari, Fatemeh Afghah, MohammadTaghi Hajiaghayi
- Abstract summary: This paper proposes inverse feature learning as a novel supervised feature learning technique that learns a set of high-level features for classification based on an error representation approach.
The proposed method results in significantly better performance compared to the state-of-the-art classification techniques for several popular data sets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes inverse feature learning as a novel supervised feature
learning technique that learns a set of high-level features for classification
based on an error representation approach. The key contribution of this method
is to learn the representation of error as high-level features, whereas current
representation learning methods interpret error through loss functions computed
from the differences between the true labels and the predicted ones. One
advantage of such a learning method is that the learned features for each class
are independent of the learned features for the other classes; therefore, the
method can learn classes simultaneously, meaning that it can learn new classes
without retraining. Error representation learning can also help with
generalization and reduce the chance of over-fitting by adding a set of
impactful features to the original data set that capture the relationships
between each instance and the different classes through an error generation and
analysis process. This method can be particularly effective for data sets in
which the instances of each class have diverse feature representations, as well
as for data sets with imbalanced classes. The experimental results show that
the proposed method performs significantly better than state-of-the-art
classification techniques on several popular data sets. We hope this paper can
open a new path toward utilizing the proposed perspective of error
representation learning in different feature learning domains.
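
To make the augmentation idea concrete, the following is a minimal,
hypothetical Python sketch, not the paper's actual algorithm: it uses the
distance of an instance to each class centroid as a crude stand-in for the
error of assuming the instance belongs to that class, and appends one such
error feature per class to the original data set. The centroid proxy, the iris
data, and the scikit-learn classifier are all illustrative assumptions.

    # Hypothetical sketch of error-feature augmentation; centroid distance is
    # only a crude proxy for the paper's error generation and analysis process.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # One centroid per class, estimated from the training split only.
    classes = np.unique(y_tr)
    centroids = np.stack([X_tr[y_tr == c].mean(axis=0) for c in classes])

    def error_features(A):
        # Distance of each instance to each class centroid: (n_samples, n_classes).
        return np.linalg.norm(A[:, None, :] - centroids[None, :, :], axis=2)

    # Augment the original features with the per-class "error" features.
    X_tr_aug = np.hstack([X_tr, error_features(X_tr)])
    X_te_aug = np.hstack([X_te, error_features(X_te)])

    clf = LogisticRegression(max_iter=1000).fit(X_tr_aug, y_tr)
    print("accuracy with error features:", clf.score(X_te_aug, y_te))

Note how each centroid depends only on its own class's instances, mirroring the
abstract's independence claim: adding a new class appends a new centroid and a
new feature column without refitting the existing ones.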
Related papers
- Don't Forget Too Much: Towards Machine Unlearning on Feature Level
We propose a refined granularity unlearning scheme referred to as "feature unlearning".
We first explore two scenarios based on whether the annotation information about the features is given.
We propose an adversarial learning approach to automatically remove effects about features.
arXiv Detail & Related papers (2024-06-16T14:08:46Z)
- Equivariance with Learned Canonicalization Functions
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- A Similarity-based Framework for Classification Task
The similarity-based method gives rise to a new class of methods for multi-label learning and achieves promising performance.
We unite similarity-based learning and generalized linear models to achieve the best of both worlds.
arXiv Detail & Related papers (2022-03-05T06:39:50Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Discriminative Attribution from Counterfactuals
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- Finding Significant Features for Few-Shot Learning using Dimensionality Reduction
This module improves accuracy by allowing the similarity function, given by the metric learning method, to operate on more discriminative features for classification.
Our method outperforms the metric learning baselines on the miniImageNet dataset by around 2% in accuracy.
arXiv Detail & Related papers (2021-07-06T16:36:57Z)
- Towards Improved and Interpretable Deep Metric Learning via Attentive Grouping
Grouping has been commonly used in deep metric learning for computing diverse features.
We propose an improved and interpretable grouping method to be integrated flexibly with any metric learning framework.
arXiv Detail & Related papers (2020-11-17T19:08:24Z)
- An analysis on the use of autoencoders for representation learning: fundamentals, learning task case studies, explainability and challenges
In many machine learning tasks, learning a good representation of the data can be the key to building a well-performing solution.
We present a series of learning tasks: data embedding for visualization, image denoising, semantic hashing, detection of abnormal behaviors and instance generation.
A solution is proposed for each task employing autoencoders as the only learning method.
arXiv Detail & Related papers (2020-05-21T08:41:57Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Deep Inverse Feature Learning: A Representation Learning of Error
This paper introduces a novel perspective on error in machine learning and proposes inverse feature learning (IFL) as a representation learning approach.
The inverse feature learning method operates based on a deep clustering approach to obtain a qualitative form of the representation of error as features; a shallow illustrative sketch follows this list.
The experimental results show that the proposed method leads to promising results in classification and especially in clustering.
arXiv Detail & Related papers (2020-03-09T17:45:44Z)
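
As a shallow, hypothetical stand-in for the deep clustering used in Deep IFL,
the sketch below clusters the data with k-means and derives a qualitative
per-class "error" feature from how poorly an instance's cluster agrees with
each class. K-means, the cluster-purity measure, and the iris data are
illustrative substitutions, not the paper's actual pipeline.

    # Hypothetical shallow sketch: k-means stands in for deep clustering.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    X, y = load_iris(return_X_y=True)
    n_classes = len(np.unique(y))

    # Assume one cluster per class for simplicity.
    km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(X)
    assign = km.labels_

    # Class composition of each cluster: fraction of each class among members.
    comp = np.zeros((n_classes, n_classes))
    for k in range(n_classes):
        members = y[assign == k]
        for c in range(n_classes):
            comp[k, c] = np.mean(members == c) if len(members) else 0.0

    # Qualitative error feature per class: 1 - purity of the instance's cluster
    # w.r.t. that class, i.e., how wrong "this instance is class c" looks.
    error_feats = 1.0 - comp[assign]     # shape: (n_samples, n_classes)
    X_aug = np.hstack([X, error_feats])  # augmented data set
    print(X_aug.shape)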
This list is automatically generated from the titles and abstracts of the papers on this site.