Learning with Signatures
- URL: http://arxiv.org/abs/2204.07953v1
- Date: Sun, 17 Apr 2022 08:36:15 GMT
- Title: Learning with Signatures
- Authors: J. de Curtò and I. de Zarzà and Carlos T. Calafate and Hong Yan
- Abstract summary: We advance a supervised framework that provides state-of-the-art classification accuracy with the use of very few labels, without the need for credit assignment, and with minimal or no overfitting.
We leverage tools from harmonic analysis, namely the signature and log-signature, and use as score functions the RMSE and MAE of the signature and log-signature.
- Score: 8.569235370614145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we investigate the use of the Signature Transform in the context of learning. In this setting, we advance a supervised framework that provides state-of-the-art classification accuracy with the use of very few labels, without the need for credit assignment, and with minimal or no overfitting. We leverage tools from harmonic analysis, namely the signature and log-signature, and use as score functions the RMSE and MAE of the signature and log-signature. We develop a closed-form equation to compute probably good optimal scale factors. Classification is performed on the CPU, orders of magnitude faster than other methods. We report results on the AFHQ, Four Shapes, MNIST, and CIFAR10 datasets, achieving 100% accuracy on all tasks.
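To make the pipeline concrete, here is a minimal numpy sketch of signature-based classification in the spirit of the abstract: each sample is treated as a multidimensional path, a truncated (level-2) signature is computed, class-mean signatures are formed from the few labeled examples, and a test sample receives the class with the lowest RMSE to a class mean. The level-2 truncation, the nearest-mean classifier, and the path construction are simplifying assumptions, not the authors' exact pipeline; the log-signature variant and the closed-form scale factors are not reproduced here.

```python
import numpy as np

def signature_level2(path):
    """Truncated signature (levels 1 and 2) of a piecewise-linear path.

    path: (T, d) array of T points in d dimensions.
    Returns a flat vector with d level-1 and d*d level-2 coordinates.
    """
    dX = np.diff(path, axis=0)                     # segment increments, (T-1, d)
    S1 = dX.sum(axis=0)                            # level 1: total displacement
    # Level 2 for a piecewise-linear path: per segment, the iterated integral
    # splits into (path so far) x (increment) + half the segment self-product.
    prefix = np.vstack([np.zeros_like(dX[:1]), np.cumsum(dX, axis=0)[:-1]])
    S2 = prefix.T @ dX + 0.5 * (dX.T @ dX)
    return np.concatenate([S1, S2.ravel()])

def fit_class_means(X_labeled, y_labeled):
    """Mean signature per class from a handful of labeled samples."""
    sigs = np.stack([signature_level2(x) for x in X_labeled])
    classes = np.unique(y_labeled)
    return classes, np.stack([sigs[y_labeled == c].mean(axis=0) for c in classes])

def predict(X, classes, means):
    """Assign each sample the class whose mean signature is closest in RMSE."""
    sigs = np.stack([signature_level2(x) for x in X])
    rmse = np.sqrt(((sigs[:, None, :] - means[None, :, :]) ** 2).mean(axis=2))
    return classes[rmse.argmin(axis=1)]
```

For MNIST, for instance, each 28x28 image could be read row-wise as a path of 28 points in 28 dimensions before calling signature_level2; the choice of path embedding and the scale factors materially affect results.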
Related papers
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
On multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
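As a rough illustration of the one-bit query in the entry above, the toy sketch below has the model guess a class for each unlabeled sample and receive a yes/no answer: a "yes" reveals the full label, while a "no" only rules out the guessed class (negative label suppression). The multi-stage training and the semi-supervised backbone are omitted, and all names are illustrative.

```python
import numpy as np

def one_bit_round(model_probs, oracle_labels):
    """One round of one-bit queries (toy sketch).

    model_probs:   (N, C) predicted class probabilities on unlabeled samples.
    oracle_labels: (N,) ground-truth labels the annotator consults.
    """
    guesses = model_probs.argmax(axis=1)             # ask: "is sample i class g?"
    correct = guesses == oracle_labels               # annotator answers yes/no
    labels = np.where(correct, guesses, -1)          # "yes" reveals the label
    suppressed = np.zeros(model_probs.shape, dtype=bool)
    suppressed[~correct, guesses[~correct]] = True   # "no" rules out one class
    return labels, suppressed                        # -1 marks still-unknown
```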
- Binary Classification with Positive Labeling Sources [71.37692084951355]
We propose WEAPO, a simple yet competitive weak supervision (WS) method for producing training labels without negative labeling sources.
We show WEAPO achieves the highest averaged performance on 10 benchmark datasets.
arXiv Detail & Related papers (2022-08-02T19:32:08Z)
- Self-Adaptive Label Augmentation for Semi-supervised Few-shot Classification [121.63992191386502]
Few-shot classification aims to learn a model that can generalize well to new tasks when only a few labeled samples are available.
We propose SALA, a semi-supervised few-shot classification method that assigns an appropriate label to each unlabeled sample, rather than relying on a manually defined metric.
A major novelty of SALA is its task-adaptive metric, which is learned adaptively for different tasks in an end-to-end fashion.
arXiv Detail & Related papers (2022-06-16T13:14:03Z)
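A minimal sketch of the task-adaptive metric idea behind SALA, under the assumption that the metric is parameterized by a learnable projection matrix M and that unlabeled samples are assigned to the nearest class prototype in the projected space; SALA's actual metric and training procedure may differ.

```python
import torch

def task_adaptive_assign(emb, protos, M):
    """Assign pseudo labels via a learned (task-adaptive) metric (sketch).

    emb:    (N, d) embeddings of unlabeled samples.
    protos: (C, d) class prototypes from the labeled support set.
    M:      (d, d) learnable matrix defining the metric ||Mx - Mp||.
    """
    dist = torch.cdist(emb @ M.T, protos @ M.T)   # (N, C) projected distances
    soft = torch.softmax(-dist, dim=1)            # differentiable assignment
    return soft.argmax(dim=1), soft               # hard labels + soft weights
```

Because the soft assignment is differentiable in M, the metric can be trained end-to-end, which is the property the entry highlights.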
- Revealing Reliable Signatures by Learning Top-Rank Pairs [15.582774097442721]
Signature verification is a crucial practical document analysis task.
We propose a new method to learn "top-rank pairs" for writer-independent offline signature verification tasks.
arXiv Detail & Related papers (2022-03-17T08:20:19Z)
- How Fine-Tuning Allows for Effective Meta-Learning [50.17896588738377]
We present a theoretical framework for analyzing representations derived from a MAML-like algorithm.
We provide risk bounds on the best predictor found by fine-tuning via gradient descent, demonstrating that the algorithm can provably leverage the shared structure.
These results establish a separation that underscores the benefit of fine-tuning-based methods, such as MAML, over methods with "frozen representation" objectives in few-shot learning.
arXiv Detail & Related papers (2021-05-05T17:56:00Z)
- Cross-domain Speech Recognition with Unsupervised Character-level Distribution Matching [60.8427677151492]
We propose CMatch, a Character-level distribution matching method to perform fine-grained adaptation between each character in two domains.
Experiments on the Libri-Adapt dataset show that our proposed approach achieves 14.39% and 16.50% relative Word Error Rate (WER) reductions on cross-device and cross-environment ASR, respectively.
arXiv Detail & Related papers (2021-04-15T14:36:54Z)
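The character-level matching in the entry above can be approximated as follows, assuming frame features have already been aligned to characters (e.g., via CTC forced alignment) and using maximum mean discrepancy (MMD) as the distribution distance; both choices are simplifications rather than the paper's exact method.

```python
import torch

def mmd(x, y, sigma=1.0):
    """Gaussian-kernel maximum mean discrepancy between two feature sets."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def char_match_loss(src_feats, src_chars, tgt_feats, tgt_chars, num_chars):
    """Average per-character MMD between source- and target-domain frames.

    *_feats: (T, d) frame features; *_chars: (T,) aligned character ids.
    """
    losses = []
    for c in range(num_chars):
        s, t = src_feats[src_chars == c], tgt_feats[tgt_chars == c]
        if len(s) > 1 and len(t) > 1:           # need frames on both sides
            losses.append(mmd(s, t))
    return torch.stack(losses).mean() if losses else src_feats.new_zeros(())
```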
- SimPLE: Similar Pseudo Label Exploitation for Semi-Supervised Classification [24.386165255835063]
A common classification task situation is where one has a large amount of data available for training, but only a small portion of it has class labels.
The goal of semi-supervised training, in this context, is to improve classification accuracy by leveraging information from a large amount of unlabeled data.
We propose a novel unsupervised objective that focuses on the less studied relationships among high-confidence unlabeled samples that are similar to each other.
Our proposed SimPLE algorithm shows significant performance gains over previous algorithms on CIFAR-100 and Mini-ImageNet, and is on par with state-of-the-art methods.
arXiv Detail & Related papers (2021-03-30T23:48:06Z)
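A simplified sketch of a pair loss in the spirit of SimPLE: among unlabeled samples, pairs whose pseudo-label distributions are both confident and mutually similar are penalized for disagreeing. The thresholds and the dot-product similarity below are assumptions, not the paper's exact formulation.

```python
import torch

def pair_loss(probs, conf_thresh=0.95, sim_thresh=0.9):
    """Agreement loss over confident, similar unlabeled pairs (sketch).

    probs: (N, C) sharpened pseudo-label distributions.
    """
    conf = probs.max(dim=1).values                    # per-sample confidence
    confident = (conf[:, None] >= conf_thresh) & (conf[None, :] >= conf_thresh)
    similar = (probs @ probs.t()) >= sim_thresh       # pairwise similarity
    mask = (confident & similar).float()
    mask.fill_diagonal_(0)                            # ignore self-pairs
    logp = probs.clamp_min(1e-8).log()
    # cross-entropy-style disagreement between paired distributions i -> j
    disagree = -(probs[:, None, :] * logp[None, :, :]).sum(dim=2)
    return (mask * disagree).sum() / mask.sum().clamp(min=1.0)
```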
- Sinkhorn Label Allocation: Semi-Supervised Classification via Annealed Self-Training [38.81973113564937]
Self-training is a standard approach to semi-supervised learning where the learner's own predictions on unlabeled data are used as supervision during training.
In this paper, we reinterpret this label assignment problem as an optimal transportation problem between examples and classes.
We demonstrate the effectiveness of our algorithm on the CIFAR-10, CIFAR-100, and SVHN datasets in comparison with FixMatch, a state-of-the-art self-training algorithm.
arXiv Detail & Related papers (2021-02-17T08:23:15Z)
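The optimal-transport view described above can be sketched with plain Sinkhorn iterations: treat -log p(class | example) as a transport cost, fix the column marginals to assumed class proportions, and read soft labels off the resulting plan. This omits the annealing schedule and the exact constraint set of Sinkhorn Label Allocation.

```python
import numpy as np

def sinkhorn_labels(log_probs, class_prior, n_iters=50, eps=0.05):
    """Allocate soft labels via Sinkhorn iterations (simplified sketch).

    log_probs:   (N, C) model log-probabilities on unlabeled examples.
    class_prior: (C,) assumed fraction of examples per class.
    """
    K = np.exp(log_probs / eps)                 # Gibbs kernel of cost -log p
    r = np.ones(len(K)) / len(K)                # uniform mass per example
    c = class_prior / class_prior.sum()         # class marginals
    u = np.ones_like(r)
    for _ in range(n_iters):                    # alternating marginal scalings
        v = c / (K.T @ u)
        u = r / (K @ v)
    P = u[:, None] * K * v[None, :]             # transport plan
    return P / P.sum(axis=1, keepdims=True)     # rows as soft pseudo labels
```

A log-domain implementation would be more numerically stable in practice; the exponentiation here is kept for clarity.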
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both the labeled and pseudo-labeled data to generate the final feature embeddings.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
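The three steps above map directly onto a generic teacher-student loop. The sketch below substitutes a linear scikit-learn classifier for SLADE's deep embedding models so that it stays runnable, and adds an assumed confidence threshold for keeping pseudo labels; SLADE itself targets distance metric learning and retrieval.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def teacher_student_self_training(X_lab, y_lab, X_unlab, conf_thresh=0.9):
    # Step 1: train a teacher on the labeled data.
    teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    # Step 2: pseudo-label the unlabeled data, keeping confident predictions.
    probs = teacher.predict_proba(X_unlab)
    keep = probs.max(axis=1) >= conf_thresh
    pseudo = teacher.classes_[probs.argmax(axis=1)[keep]]
    # Step 3: train a student on labeled + confidently pseudo-labeled data.
    X_all = np.vstack([X_lab, X_unlab[keep]])
    y_all = np.concatenate([y_lab, pseudo])
    return LogisticRegression(max_iter=1000).fit(X_all, y_all)
```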
- Learning Graph-Based Priors for Generalized Zero-Shot Learning [21.43100823741393]
Zero-shot learning (ZSL) requires correctly predicting the labels of samples from classes that were unseen at training time.
Recent approaches to GZSL have shown the value of generative models, which are used to generate samples from unseen classes.
In this work, we incorporate an additional source of side information in the form of a relation graph over labels.
arXiv Detail & Related papers (2020-10-22T01:20:46Z)
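One simple way to use a relation graph over labels as a prior is to smooth class embeddings over the graph, so that unseen classes inherit structure from related seen classes. The sketch below does this with row-normalized neighborhood averaging; the paper's generative GZSL machinery is considerably more involved, and all names here are illustrative.

```python
import numpy as np

def graph_prior_embeddings(class_emb, adj, n_layers=2):
    """Smooth class embeddings over a label relation graph (toy sketch).

    class_emb: (C, d) initial class embeddings (e.g., attribute vectors).
    adj:       (C, C) adjacency matrix of the relation graph over labels.
    """
    A = adj + np.eye(len(adj))                  # add self-loops
    A = A / A.sum(axis=1, keepdims=True)        # row-normalize
    for _ in range(n_layers):                   # each layer mixes neighbors
        class_emb = A @ class_emb
    return class_emb                            # graph-informed class priors
```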
This list is automatically generated from the titles and abstracts of the papers on this site.