Weakly-supervised Temporal Path Representation Learning with Contrastive
Curriculum Learning -- Extended Version
- URL: http://arxiv.org/abs/2203.16110v2
- Date: Fri, 1 Apr 2022 15:37:26 GMT
- Title: Weakly-supervised Temporal Path Representation Learning with Contrastive
Curriculum Learning -- Extended Version
- Authors: Sean Bin Yang, Chenjuan Guo, Jilin Hu, Bin Yang, Jian Tang, and
Christian S. Jensen
- Abstract summary: A temporal path (TP), which incorporates temporal information, e.g., departure time, into the path, is fundamental to enabling such applications.
Existing methods fail to achieve this goal since (i) supervised methods require large amounts of task-specific labels during training and thus fail to generalize the obtained TPRs to other tasks, and (ii) unsupervised methods disregard the temporal aspect, yielding sub-optimal results.
We propose a Weakly-Supervised Contrastive (WSC) learning model that encodes both the spatial and temporal information of a temporal path into a TPR.
- Score: 35.86394282979721
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In step with the digitalization of transportation, we are witnessing a
growing range of path-based smart-city applications, e.g., travel-time
estimation and travel path ranking. A temporal path~(TP), which incorporates
temporal information, e.g., departure time, into the path, is fundamental to
enabling such applications. In this setting, it is essential to learn generic temporal
path representations~(TPRs) that consider spatial and temporal correlations
simultaneously and that can be used in different applications, i.e., downstream
tasks. Existing methods fail to achieve the goal since (i) supervised methods
require large amounts of task-specific labels when training and thus fail to
generalize the obtained TPRs to other tasks; (ii) though unsupervised methods
can learn generic representations, they disregard the temporal aspect, leading
to sub-optimal results. To contend with the limitations of existing solutions,
we propose a Weakly-Supervised Contrastive (WSC) learning model. We first
propose a temporal path encoder that encodes both the spatial and temporal
information of a temporal path into a TPR. To train the encoder, we introduce
weak labels that are easy and inexpensive to obtain, and are relevant to
different tasks, e.g., temporal labels indicating peak vs. off-peak hour from
departure times. Based on the weak labels, we construct meaningful positive and
negative temporal path samples by considering both spatial and temporal
information, which facilitates training the encoder using contrastive learning
by pulling closer the positive samples' representations while pushing away the
negative samples' representations. To better guide the contrastive learning, we
propose a learning strategy based on Curriculum Learning such that learning
proceeds from easy to hard training instances. Experimental studies verify the
effectiveness of the proposed method.
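To make the weak-label idea concrete, the sketch below derives a peak/off-peak temporal label from a departure time. It is a minimal illustration, not the paper's implementation; the peak windows are assumptions made for the example.

```python
from datetime import datetime

def peak_label(departure: datetime) -> int:
    """Weak temporal label for a path: 1 if the departure falls in a
    peak window, 0 otherwise. The windows (7-9 and 16-19) are
    illustrative assumptions, not the paper's actual thresholds."""
    hour = departure.hour
    return int(7 <= hour < 9 or 16 <= hour < 19)

# Example: paths departing in the same window share a weak label and
# can serve as mutual positives in contrastive training.
assert peak_label(datetime(2022, 4, 1, 8, 15)) == 1   # morning peak
assert peak_label(datetime(2022, 4, 1, 13, 0)) == 0   # off-peak
```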
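The contrastive objective and the easy-to-hard curriculum can likewise be sketched in a generic InfoNCE-style form. This is an assumption-laden illustration of the training idea, not the paper's exact WSC loss; the tensor shapes and the `difficulty` callable are hypothetical.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """Pull the positive TPR toward the anchor, push negatives away.

    anchor:    (d,)   TPR of the anchor temporal path
    positive:  (d,)   TPR of a path sharing the anchor's weak label
    negatives: (k, d) TPRs of paths with mismatching weak labels
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor @ positive).unsqueeze(0) / temperature   # (1,)
    neg_sim = (negatives @ anchor) / temperature               # (k,)
    logits = torch.cat([pos_sim, neg_sim]).unsqueeze(0)        # (1, k+1)
    # Cross-entropy with target index 0 raises the positive's
    # similarity and lowers the negatives' similarities.
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))

def curriculum_order(instances, difficulty):
    """Order training instances from easy to hard. How `difficulty`
    scores an instance is an assumption here; the paper defines its
    own curriculum criterion."""
    return sorted(instances, key=difficulty)
```

Training then iterates over `curriculum_order(...)` so that early updates see easy instances, e.g., clearly separated positives and negatives, before harder ones.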
Related papers
- Learning Discriminative Spatio-temporal Representations for Semi-supervised Action Recognition [23.44320273156057]
We propose an Adaptive Contrastive Learning (ACL) strategy and a Multi-scale Temporal Learning (MTL) strategy.
The ACL strategy assesses the confidence of all unlabeled samples via the class prototypes of the labeled data and adaptively selects positive and negative samples from a pseudo-labeled sample bank for contrastive learning.
The MTL strategy highlights informative semantics from long-term clips and integrates them into short-term clips while suppressing noisy information.
arXiv Detail & Related papers (2024-04-25T08:49:08Z)
- TimewarpVAE: Simultaneous Time-Warping and Representation Learning of Trajectories [15.28090738928877]
TimewarpVAE is a manifold-learning algorithm that simultaneously learns timing variations and latent factors of spatial variation.
We show how the algorithm learns appropriate time alignments and meaningful representations of spatial variations in handwriting and fork manipulation datasets.
arXiv Detail & Related papers (2023-10-24T17:43:16Z)
- On the Importance of Spatial Relations for Few-shot Action Recognition [109.2312001355221]
In this paper, we investigate the importance of spatial relations and propose a more accurate few-shot action recognition method.
A novel Spatial Alignment Cross Transformer (SA-CT) learns to re-adjust the spatial relations and incorporates the temporal information.
Experiments reveal that, even without using any temporal information, the performance of SA-CT is comparable to that of temporal-based methods on 3 of the 4 benchmarks.
arXiv Detail & Related papers (2023-08-14T12:58:02Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Contrastive Distillation Is a Sample-Efficient Self-Supervised Loss Policy for Transfer Learning [20.76863234714442]
We propose a self-supervised loss policy called contrastive distillation which manifests latent variables with high mutual information.
We show how this outperforms common methods of transfer learning and suggests a useful design axis of trading off compute for online transfer.
arXiv Detail & Related papers (2022-12-21T20:43:46Z)
- ALLSH: Active Learning Guided by Local Sensitivity and Hardness [98.61023158378407]
We propose to retrieve unlabeled samples with a local sensitivity and hardness-aware acquisition function.
Our method achieves consistent gains over the commonly used active learning strategies in various classification tasks.
arXiv Detail & Related papers (2022-05-10T15:39:11Z)
- Leveraging Time Irreversibility with Order-Contrastive Pre-training [3.1848820580333737]
We explore an "order-contrastive" method for self-supervised pre-training on longitudinal data.
We prove a finite-sample guarantee for the downstream error of a representation learned with order-contrastive pre-training.
Our results indicate that pre-training methods designed for particular classes of distributions and downstream tasks can improve the performance of self-supervised learning.
arXiv Detail & Related papers (2021-11-04T02:56:52Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- Geography-Aware Self-Supervised Learning [79.4009241781968]
We show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks.
We propose novel training methods that exploit the spatially aligned structure of remote sensing data.
Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing.
arXiv Detail & Related papers (2020-11-19T17:29:13Z)
- Learning Invariant Representations for Reinforcement Learning without Reconstruction [98.33235415273562]
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.
Bisimulation metrics quantify behavioral similarity between states in continuous MDPs.
We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks.
arXiv Detail & Related papers (2020-06-18T17:59:35Z)