Pairwise Supervision Can Provably Elicit a Decision Boundary
- URL: http://arxiv.org/abs/2006.06207v2
- Date: Tue, 1 Mar 2022 04:40:36 GMT
- Title: Pairwise Supervision Can Provably Elicit a Decision Boundary
- Authors: Han Bao, Takuya Shimada, Liyuan Xu, Issei Sato, Masashi Sugiyama
- Abstract summary: Similarity learning is the problem of eliciting useful representations by predicting the relationship between a pair of patterns.
We show that similarity learning is capable of solving binary classification by directly eliciting a decision boundary.
- Score: 84.58020117487898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Similarity learning is the general problem of eliciting useful representations by
predicting the relationship between a pair of patterns. This problem is related
to various important preprocessing tasks such as metric learning, kernel
learning, and contrastive learning. A classifier built upon the representations
is expected to perform well in downstream classification; however, little
theory has been given in the literature so far, and thus the relationship between
similarity and classification has remained elusive. We therefore tackle a
fundamental question: can similarity information provably lead a model to
perform well in downstream classification? In this paper, we reveal that a
product-type formulation of similarity learning is strongly related to an
objective of binary classification. We further show that these two different
problems are explicitly connected by an excess risk bound. Consequently, our
results elucidate that similarity learning is capable of solving binary
classification by directly eliciting a decision boundary.
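The product-type formulation described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's exact objective, only an assumed instance of the idea: a single linear scorer f is trained so that the sign of the product f(x) f(x') matches the pairwise similarity label s = y * y', after which sign(f) itself serves as the binary classifier (up to a global sign flip). The data, model, and loss here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 2-D with labels y in {-1, +1}; the learner never
# sees y directly, only pairwise similarity labels s = y * y'.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

i = rng.integers(0, 2 * n, 2000)
j = rng.integers(0, 2 * n, 2000)
s = y[i] * y[j]  # +1 for a similar pair, -1 for a dissimilar pair

# Product-type formulation: a scorer f(x) = w.x + b trained with a logistic
# loss on the *product* f(x) f(x'), pushing sign(f(x) f(x')) to match s.
w, b = rng.normal(0.0, 0.1, 2), 0.0  # nonzero init (gradient vanishes at 0)
lr = 0.1
for _ in range(1000):
    fi, fj = X[i] @ w + b, X[j] @ w + b
    m = np.clip(s * fi * fj, -30, 30)   # pairwise margin
    g = -s / (1.0 + np.exp(m))          # d loss / d(fi * fj)
    w -= lr * ((g * fj) @ X[i] + (g * fi) @ X[j]) / len(s)
    b -= lr * np.sum(g * (fi + fj)) / len(s)

# The scorer's sign recovers the two classes, up to a global sign flip.
pred = np.sign(X @ w + b)
acc = max(np.mean(pred == y), np.mean(pred == -y))
print(f"downstream accuracy (up to sign flip): {acc:.2f}")
```

The sign ambiguity is inherent: pairwise labels are invariant under flipping every y, so the elicited boundary is correct only up to which side is called positive.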
Related papers
- Simple and Interpretable Probabilistic Classifiers for Knowledge Graphs [0.0]
We describe an inductive approach based on learning simple belief networks.
We show how such models can be converted into (probabilistic) axioms (or rules)
arXiv Detail & Related papers (2024-07-09T17:05:52Z)
- Neural Dependencies Emerging from Learning Massive Categories [94.77992221690742]
This work presents two astonishing findings on neural networks learned for large-scale image classification.
1) Given a well-trained model, the logits predicted for some category can be directly obtained by linearly combining the predictions of a few other categories.
2) Neural dependencies exist not only within a single model, but even between two independently learned models.
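Finding 1) can be mimicked in a toy numpy sketch. The "logits" below are synthetic stand-ins (an assumption, not the paper's data): we plant a dependency where one category's logit is nearly a linear mix of two others, and ordinary least squares over the remaining categories recovers the combination.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a trained model's logits: 1000 samples x 10 categories.
Z = rng.normal(size=(1000, 10))

# Plant a dependency of the kind the paper reports: category 0's logit is
# (almost) a linear combination of the logits of categories 3 and 7.
Z[:, 0] = 0.6 * Z[:, 3] - 0.4 * Z[:, 7] + 0.01 * rng.normal(size=1000)

# Recover the combination by least squares on the remaining categories.
A = Z[:, 1:]                                   # columns for categories 1..9
coef, *_ = np.linalg.lstsq(A, Z[:, 0], rcond=None)
print(np.round(coef, 2))  # near 0.6 at index 2 (cat 3), -0.4 at index 6 (cat 7)
```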
arXiv Detail & Related papers (2022-11-21T09:42:15Z)
- Understanding Contrastive Learning Requires Incorporating Inductive Biases [64.56006519908213]
Recent attempts to theoretically explain the success of contrastive learning on downstream tasks prove guarantees that depend on properties of augmentations and the value of the contrastive loss of representations.
We demonstrate that such analyses ignore inductive biases of the function class and training algorithm, even provably leading to vacuous guarantees in some settings.
arXiv Detail & Related papers (2022-02-28T18:59:20Z) - Logic-guided Semantic Representation Learning for Zero-Shot Relation
Classification [31.887770824130957]
We propose a novel logic-guided semantic representation learning model for zero-shot relation classification.
Our approach builds connections between seen and unseen relations via implicit and explicit semantic representations with knowledge graph embeddings and logic rules.
arXiv Detail & Related papers (2020-10-30T04:30:09Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms state-of-the-art methods, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z) - Learning Interclass Relations for Image Classification [0.0]
In standard classification, we typically treat class categories as independent of one another.
In this work, we propose novel formulations of the classification problem, based on the realization that the assumption of class independence is a limiting factor that increases the amount of training data required.
arXiv Detail & Related papers (2020-06-24T05:32:54Z) - Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
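One simple instance of such a framework (an assumed illustration, not necessarily the paper's own formulation) treats pairwise similarity as the aggregate observation: with an instance model p = sigmoid(w.x), an independent pair is similar with probability q = p p' + (1-p)(1-p'), and w is fitted by maximum likelihood with plain gradient ascent.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Hidden instance labels; the only supervision is the aggregate signal
# s = 1 when a sampled pair shares a class.
n = 200
X = np.vstack([rng.normal(-2.0, 1.0, (n, 2)), rng.normal(2.0, 1.0, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

i = rng.integers(0, 2 * n, 3000)
j = rng.integers(0, 2 * n, 3000)
s = (y[i] == y[j]).astype(float)

# Maximum likelihood: gradient ascent on s log q + (1 - s) log(1 - q),
# where q = p p' + (1 - p)(1 - p') under instance independence.
w = rng.normal(0.0, 0.1, 2)
lr = 0.5
for _ in range(500):
    pi, pj = sigmoid(X[i] @ w), sigmoid(X[j] @ w)
    q = np.clip(pi * pj + (1 - pi) * (1 - pj), 1e-6, 1 - 1e-6)
    dq = s / q - (1 - s) / (1 - q)            # d log-likelihood / dq
    gi = dq * (2 * pj - 1) * pi * (1 - pi)    # chain rule through pi
    gj = dq * (2 * pi - 1) * pj * (1 - pj)    # chain rule through pj
    w += lr * (gi @ X[i] + gj @ X[j]) / len(s)

pred = (sigmoid(X @ w) > 0.5).astype(float)
acc = max(np.mean(pred == y), np.mean(pred == 1 - y))
print(f"instance-level accuracy from aggregate pairs: {acc:.2f}")
```

As with any supervision built from pairwise agreement, the recovered labeling is only identified up to swapping the two classes.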
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
- Rethinking Class Relations: Absolute-relative Supervised and Unsupervised Few-shot Learning [157.62595449130973]
We study the fundamental problem of simplistic class modeling in current few-shot learning methods.
We propose a novel Absolute-relative Learning paradigm to fully take advantage of label information to refine the image representations.
arXiv Detail & Related papers (2020-01-12T12:25:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences of its use.