Contrastive learning of strong-mixing continuous-time stochastic
processes
- URL: http://arxiv.org/abs/2103.02740v1
- Date: Wed, 3 Mar 2021 23:06:47 GMT
- Title: Contrastive learning of strong-mixing continuous-time stochastic
processes
- Authors: Bingbin Liu, Pradeep Ravikumar, Andrej Risteski
- Abstract summary: Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
- Score: 53.82893653745542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive learning is a family of self-supervised methods where a model is
trained to solve a classification task constructed from unlabeled data. It has
recently emerged as one of the leading learning paradigms in the absence of
labels across many different domains (e.g. brain imaging, text, images).
However, theoretical understanding of many aspects of training, both
statistical and algorithmic, remains fairly elusive.
In this work, we study the setting of time series -- more precisely, when we
get data from a strong-mixing continuous-time stochastic process. We show that
a properly constructed contrastive learning task can be used to estimate the
transition kernel for small-to-mid-range intervals in the diffusion case.
Moreover, we give sample complexity bounds for solving this task and
quantitatively characterize what the value of the contrastive loss implies for
distributional closeness of the learned kernel. As a byproduct, we illuminate
the appropriate settings for the contrastive distribution, as well as other
hyperparameters in this setup.
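To make the construction more concrete, here is a minimal, hypothetical sketch of how such a contrastive task could be set up; it is an illustrative assumption, not the paper's actual procedure or code. It builds positive pairs from consecutive observations of a trajectory, builds negatives by pairing the same starting point with a draw from a contrastive distribution (here simply the shuffled marginal), and trains a binary classifier whose output approximates the density ratio between the transition kernel and the contrastive distribution. The names (`PairClassifier`, `contrastive_step`), the Ornstein-Uhlenbeck toy data, and all hyperparameters are invented for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical sketch (not the paper's implementation) of a contrastive task for
# estimating a diffusion's transition kernel from unlabeled trajectory data.
# Positive pairs are consecutive observations (x_t, x_{t+dt}); negative pairs
# replace x_{t+dt} with a draw from a chosen contrastive distribution q (here the
# shuffled marginal). A classifier f(x, x') trained to tell the two apart
# approximates the density ratio p_dt(x' | x) / q(x'), so the transition kernel
# can be read off as p_dt(x' | x) ≈ q(x') * sigmoid(f) / (1 - sigmoid(f)).

class PairClassifier(nn.Module):
    """Scores a pair (x, x') with a single logit."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, x_next):
        return self.net(torch.cat([x, x_next], dim=-1)).squeeze(-1)

def contrastive_step(model, opt, traj, dt_stride=1):
    """One training step on an observed trajectory of shape (T, dim)."""
    x, x_next = traj[:-dt_stride], traj[dt_stride:]       # positive pairs
    x_neg = x_next[torch.randperm(len(x_next))]           # contrastive (marginal) draws
    logits = torch.cat([model(x, x_next), model(x, x_neg)])
    labels = torch.cat([torch.ones(len(x)), torch.zeros(len(x))])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy data: an Ornstein-Uhlenbeck path, a standard example of a strong-mixing diffusion.
T, dim, dt = 5000, 2, 0.05
traj = torch.zeros(T, dim)
for t in range(1, T):
    traj[t] = traj[t - 1] - traj[t - 1] * dt + (dt ** 0.5) * torch.randn(dim)

model = PairClassifier(dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    contrastive_step(model, opt, traj)
```

In this sketch, the choice of contrastive distribution and the interval length (`dt_stride` above) play the role of the hyperparameters whose appropriate settings the abstract says the analysis illuminates.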
Related papers
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- ProbMCL: Simple Probabilistic Contrastive Learning for Multi-label Visual Classification [16.415582577355536]
Multi-label image classification presents a challenging task in many domains, including computer vision and medical imaging.
Recent advancements have introduced graph-based and transformer-based methods to improve performance and capture label dependencies.
We propose Probabilistic Multi-label Contrastive Learning (ProbMCL), a novel framework to address these challenges.
arXiv Detail & Related papers (2024-01-02T22:15:20Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Accelerated Probabilistic Marching Cubes by Deep Learning for Time-Varying Scalar Ensembles [5.102811033640284]
This paper introduces a deep-learning-based approach to learning the level-set uncertainty for two-dimensional ensemble data.
We train the model using the first few time steps from time-varying ensemble data in our workflow.
We demonstrate that our trained model accurately infers uncertainty in level sets for new time steps and is up to 170X faster than the original probabilistic model.
arXiv Detail & Related papers (2022-07-15T02:35:41Z)
- Aligned Multi-Task Gaussian Process [12.903751268469696]
Multi-task learning requires accurate identification of the correlations between tasks.
Traditional multi-task models do not account for this and subsequent errors in correlation estimation will result in poor predictive performance.
We introduce a method that automatically accounts for temporal misalignment in a unified generative model that improves predictive performance.
arXiv Detail & Related papers (2021-10-29T13:18:13Z)
- MetaKernel: Learning Variational Random Features with Limited Labels [120.90737681252594]
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks.
We propose meta-learning kernels with random Fourier features for few-shot learning, which we call MetaKernel.
arXiv Detail & Related papers (2021-05-08T21:24:09Z)
- Effective Proximal Methods for Non-convex Non-smooth Regularized Learning [27.775096437736973]
We show that the independent sampling scheme tends to improve performance over the commonly used uniform sampling scheme.
Our new analysis also derives a better convergence speed for this sampling scheme than the best one available so far.
arXiv Detail & Related papers (2020-09-14T16:41:32Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)