Pairwise Difference Learning for Classification
- URL: http://arxiv.org/abs/2406.20031v1
- Date: Fri, 28 Jun 2024 16:20:22 GMT
- Title: Pairwise Difference Learning for Classification
- Authors: Mohamed Karim Belaid, Maximilian Rabus, Eyke Hüllermeier
- Abstract summary: Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression.
We extend PDL toward the task of classification by solving a suitably defined (binary) classification problem on a paired version of the original training data.
We provide an easy-to-use and publicly available implementation of PDL in a Python package.
- Score: 19.221081896134567
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Pairwise difference learning (PDL) has recently been introduced as a new meta-learning technique for regression. Instead of learning a mapping from instances to outcomes in the standard way, the key idea is to learn a function that takes two instances as input and predicts the difference between the respective outcomes. Given a function of this kind, predictions for a query instance are derived from every training example and then averaged. This paper extends PDL toward the task of classification and proposes a meta-learning technique for inducing a PDL classifier by solving a suitably defined (binary) classification problem on a paired version of the original training data. We analyze the performance of the PDL classifier in a large-scale empirical study and find that it outperforms state-of-the-art methods in terms of prediction performance. Last but not least, we provide an easy-to-use and publicly available implementation of PDL in a Python package.
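The paired construction described in the abstract lends itself to a compact illustration. Below is a minimal, hypothetical sketch of a PDL-style classifier, not the authors' released package: the class name, the pairing features (both instances plus their difference), and the random-forest base learner are assumptions made only for this example. A binary model is trained on all training pairs to predict whether the two instances share a class; a query is then paired with every training anchor, each anchor votes for its own class with weight equal to the predicted same-class probability, and the votes are averaged per class.

```python
# Minimal, hypothetical sketch of a pairwise-difference classifier (not the
# authors' released package). A binary base model learns, from paired training
# instances, whether the two members of a pair share the same class.
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier

class PairwiseDifferenceClassifierSketch:
    def __init__(self, base_estimator=None):
        self.base_estimator = base_estimator or RandomForestClassifier()

    @staticmethod
    def _pair_features(A, B):
        # Represent a pair by both instances and their difference (an assumption).
        return np.hstack([A, B, A - B])

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.X_, self.y_ = X, y
        self.classes_ = np.unique(y)
        # Paired version of the training data: every ordered pair (i, j).
        i, j = np.meshgrid(np.arange(len(X)), np.arange(len(X)), indexing="ij")
        i, j = i.ravel(), j.ravel()
        pairs = self._pair_features(X[i], X[j])
        same_class = (y[i] == y[j]).astype(int)  # binary target: same class or not
        self.pair_model_ = clone(self.base_estimator).fit(pairs, same_class)
        return self

    def predict_proba(self, X_query):
        X_query = np.asarray(X_query, dtype=float)
        proba = np.zeros((len(X_query), len(self.classes_)))
        for q, x in enumerate(X_query):
            # Pair the query with every training anchor.
            anchors = self._pair_features(np.tile(x, (len(self.X_), 1)), self.X_)
            p_same = self.pair_model_.predict_proba(anchors)[:, 1]
            # Each anchor votes for its own class, weighted by P(same class).
            for k, c in enumerate(self.classes_):
                proba[q, k] = p_same[self.y_ == c].mean()
        return proba / proba.sum(axis=1, keepdims=True)

    def predict(self, X_query):
        return self.classes_[self.predict_proba(X_query).argmax(axis=1)]
```

Note that the paired training set grows quadratically with the number of training instances, which is the main practical cost of any pairwise scheme of this kind.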
Related papers
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification (arXiv, 2024-07-16) [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation (arXiv, 2024-02-06) [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
- Prediction Error-based Classification for Class-Incremental Learning (arXiv, 2023-05-30) [39.91805363069707]
We introduce Prediction Error-based Classification (PEC)
PEC computes a class score by measuring the prediction error of a model trained to replicate the outputs of a frozen random neural network on data from that class (a minimal sketch of this scoring rule appears after this list).
PEC offers several practical advantages, including sample efficiency, ease of tuning, and effectiveness even when data are presented one class at a time.
- Class-Incremental Learning with Generative Classifiers (arXiv, 2021-04-20) [6.570917734205559]
We propose a new strategy for class-incremental learning: generative classification.
Our proposal is to learn the joint distribution p(x,y), factorized as p(x|y)p(y), and to perform classification using Bayes' rule (see the sketch after this list).
As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned.
- An Empirical Comparison of Instance Attribution Methods for NLP (arXiv, 2021-04-09) [62.63504976810927]
We evaluate the degree to which different instance attribution methods agree with respect to the importance of training samples.
We find that simple retrieval methods yield training instances that differ from those identified via gradient-based methods.
- Contrastive Prototype Learning with Augmented Embeddings for Few-Shot Learning (arXiv, 2021-01-23) [58.2091760793799]
We propose a novel contrastive prototype learning with augmented embeddings (CPLAE) model.
With a class prototype as an anchor, CPL aims to pull the query samples of the same class closer and those of different classes further away.
Extensive experiments on several benchmarks demonstrate that our proposed CPLAE achieves new state-of-the-art.
- Simultaneous Perturbation Stochastic Approximation for Few-Shot Learning (arXiv, 2020-06-09) [0.5801044612920815]
We propose a prototypical-like few-shot learning approach based on the prototypical networks method.
Experiments on the benchmark dataset show that the proposed method outperforms the original prototypical networks.
- Prototypical Contrastive Learning of Unsupervised Representations (arXiv, 2020-05-11) [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
- Rethinking Class-Balanced Methods for Long-Tailed Visual Recognition from a Domain Adaptation Perspective (arXiv, 2020-03-24) [98.70226503904402]
Object frequency in the real world often follows a power law, leading to a mismatch between the long-tailed class distributions a model is trained on and the expectation that the model performs well on all classes.
We propose to augment the classic class-balanced learning by explicitly estimating the differences between the class-conditioned distributions with a meta-learning approach.
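As referenced in the Prediction Error-based Classification entry above, the following is a hypothetical sketch of the scoring rule that summary describes; the random tanh "teacher", the MLPRegressor students, and all function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of prediction-error-based class scoring (illustrative
# names and model choices; not the paper's code). One "student" regressor per
# class learns to replicate a frozen random teacher on that class's data only;
# at test time, the class whose student replicates the teacher best wins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def fit_pec_students(X, y, out_dim=16):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    # Frozen random "teacher": a fixed random linear map with a tanh nonlinearity.
    W = rng.normal(size=(X.shape[1], out_dim))
    teacher = lambda Z: np.tanh(np.asarray(Z, dtype=float) @ W)
    students = {}
    for c in np.unique(y):
        Xc = X[y == c]
        # One student per class, trained only on that class's data.
        students[c] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000,
                                   random_state=0).fit(Xc, teacher(Xc))
    return teacher, students

def pec_predict(X, teacher, students):
    X = np.asarray(X, dtype=float)
    target = teacher(X)
    # Lower replication error for class c => higher score for class c.
    classes = list(students)
    errors = np.stack([np.mean((students[c].predict(X) - target) ** 2, axis=1)
                       for c in classes], axis=1)
    return np.array(classes)[errors.argmin(axis=1)]
```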
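Likewise, as referenced in the Class-Incremental Learning with Generative Classifiers entry, here is a short illustration of classifying through the factorization p(x,y) = p(x|y)p(y) and Bayes' rule. The paper trains a variational autoencoder per class; this sketch substitutes a per-class Gaussian density purely to keep the example self-contained.

```python
# Illustration of classification via the factorization p(x, y) = p(x|y) p(y)
# and Bayes' rule. A per-class Gaussian stands in for the class-conditional
# density (the paper itself uses one VAE per class).
import numpy as np
from scipy.stats import multivariate_normal

def fit_generative_classifier(X, y):
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    model = {}
    for c in np.unique(y):
        Xc = X[y == c]
        model[c] = {
            "prior": len(Xc) / len(X),           # p(y = c)
            "density": multivariate_normal(      # p(x | y = c)
                Xc.mean(axis=0),
                np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])),
        }
    return model

def predict(model, X):
    X = np.asarray(X, dtype=float)
    classes = list(model)
    # Bayes' rule: argmax_c log p(x | c) + log p(c); the evidence p(x) cancels.
    log_post = np.stack([model[c]["density"].logpdf(X) + np.log(model[c]["prior"])
                         for c in classes], axis=1)
    return np.array(classes)[log_post.argmax(axis=1)]
```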
This list is automatically generated from the titles and abstracts of the papers on this site.