Mutual Distillation Learning Network for Trajectory-User Linking
- URL: http://arxiv.org/abs/2205.03773v1
- Date: Sun, 8 May 2022 03:50:37 GMT
- Title: Mutual Distillation Learning Network for Trajectory-User Linking
- Authors: Wei Chen and Shuzhe Li and Chao Huang and Yanwei Yu and Yongguo Jiang and Junyu Dong
- Abstract summary: Trajectory-User Linking (TUL) has been a challenging problem due to the sparsity in check-in mobility data.
We propose MainTUL, a novel mutual distillation learning network that solves the TUL problem for sparse check-in mobility data.
- Score: 30.954285341714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory-User Linking (TUL), which links trajectories to users who generate
them, has been a challenging problem due to the sparsity in check-in mobility
data. Existing methods ignore either historical data or the rich contextual
features in check-in data, resulting in poor performance on the TUL task. In
this paper, we propose MainTUL, a novel mutual distillation learning network
for the TUL problem on sparse check-in mobility data.
Specifically, MainTUL is composed of a Recurrent Neural Network (RNN)
trajectory encoder that models the sequential patterns of the input trajectory and a
temporal-aware Transformer trajectory encoder that captures long-term time
dependencies for the corresponding augmented historical trajectories. Then, the
knowledge learned on historical trajectories is transferred between the two
trajectory encoders to guide the learning of both encoders to achieve mutual
distillation of information. Experimental results on two real-world check-in
mobility datasets demonstrate the superiority of MainTUL against
state-of-the-art baselines. The source code of our model is available at
https://github.com/Onedean/MainTUL.
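The mutual-distillation step described above can be sketched as a symmetric soft-label exchange between the two encoders: each encoder's temperature-softened predictions serve as targets for the other. The sketch below is illustrative only, assuming generic classification logits; the function names and temperature value are our own, not taken from the MainTUL code.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    """Row-wise KL(p || q) between probability distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

def mutual_distillation_loss(logits_rnn, logits_transformer, T=2.0):
    """Symmetric soft-label transfer between the two trajectory encoders.

    Each encoder's softened prediction acts as the target for the other,
    so knowledge flows in both directions (mutual distillation).
    """
    p_rnn = softmax(logits_rnn, T)
    p_trf = softmax(logits_transformer, T)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return (T ** 2) * np.mean(kl_divergence(p_rnn, p_trf)
                              + kl_divergence(p_trf, p_rnn))
```

When both encoders agree, the loss is zero; disagreement is penalized in both directions, unlike one-way teacher-student distillation.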
Related papers
- HGTUL: A Hypergraph-based Model For Trajectory User Linking [2.9945319641858985]
Trajectory User Linking (TUL) links anonymous trajectories with the users who generate them.
We propose a novel HyperGraph-based multi-perspective Trajectory User Linking model (HGTUL)
arXiv Detail & Related papers (2025-02-11T13:39:35Z)
- Towards Stable and Storage-efficient Dataset Distillation: Matching Convexified Trajectory [53.37473225728298]
The rapid evolution of deep learning and large language models has led to an exponential growth in the demand for training data.
Matching Training Trajectories (MTT) has been a prominent approach, which replicates the training trajectory of an expert network on real data with a synthetic dataset.
We introduce a novel method called Matching Convexified Trajectory (MCT), which aims to provide better guidance for the student trajectory.
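MCT builds on the standard trajectory-matching objective, which penalizes the distance between the student's parameters and the expert's target parameters, normalized by how far the expert moved over the matched segment. A minimal numpy sketch of that baseline MTT-style loss (function names are our own illustration; the convexification step specific to MCT is not shown):

```python
import numpy as np

def trajectory_matching_loss(theta_student, theta_expert_target, theta_expert_start):
    """MTT-style normalized parameter-matching loss.

    theta_student: student parameters after training on the synthetic data.
    theta_expert_target: expert parameters at the end of the matched segment.
    theta_expert_start: expert parameters at the start of the matched segment.
    """
    theta_student = np.asarray(theta_student, dtype=float)
    theta_expert_target = np.asarray(theta_expert_target, dtype=float)
    theta_expert_start = np.asarray(theta_expert_start, dtype=float)
    # squared distance to the expert's endpoint ...
    num = np.sum((theta_student - theta_expert_target) ** 2)
    # ... normalized by how far the expert traveled on real data
    den = np.sum((theta_expert_start - theta_expert_target) ** 2)
    return num / den
```

A student that lands exactly on the expert's endpoint scores 0; one that never leaves the starting point scores 1, which makes the loss scale-free across layers.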
arXiv Detail & Related papers (2024-06-28T11:06:46Z)
- Revisiting CNNs for Trajectory Similarity Learning [20.311950784166388]
We introduce ConvTraj, incorporating both 1D and 2D convolutions to capture sequential and geo-distribution features of trajectories.
We show that ConvTraj achieves state-of-the-art accuracy in trajectory similarity search.
arXiv Detail & Related papers (2024-05-30T07:16:03Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
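The "TC" stream's idea of turning a 1D signal into a 2D time-scale tensor via the CWT can be illustrated with a minimal numpy implementation. This is a hand-rolled real-Morlet transform for illustration only, not the authors' code; the wavelet choice and scale grid are our own assumptions.

```python
import numpy as np

def morlet(t):
    """Real-valued Morlet mother wavelet: a Gaussian-windowed cosine."""
    return np.exp(-0.5 * t ** 2) * np.cos(5.0 * t)

def cwt_tensor(signal, scales):
    """Convolve the signal with dilated wavelets and stack the results
    into a (n_scales, n_samples) 2D tensor, one row per scale."""
    signal = np.asarray(signal, dtype=float)
    rows = []
    for s in scales:
        # sample the dilated wavelet on a grid proportional to the scale,
        # capped at the signal length so 'same' convolution keeps n_samples
        width = int(min(10 * s, len(signal)))
        t = (np.arange(width) - (width - 1) / 2.0) / s
        kernel = morlet(t) / np.sqrt(s)  # energy normalization per scale
        rows.append(np.convolve(signal, kernel, mode="same"))
    return np.stack(rows)
```

For example, `cwt_tensor(sig, range(1, 31))` yields a `(30, len(sig))` array, an image-like representation that 2D convolutional layers can consume directly.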
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch on the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z)
- Characterization of anomalous diffusion through convolutional transformers [0.8984888893275713]
We propose a new transformer-based neural network architecture for the characterization of anomalous diffusion.
Our new architecture, the Convolutional Transformer (ConvTransformer), uses a bi-layered convolutional neural network to extract features from our diffusive trajectories.
We show that the ConvTransformer is able to outperform the previous state of the art at determining the underlying diffusive regime in short trajectories.
arXiv Detail & Related papers (2022-10-10T18:53:13Z)
- Joint Spatial-Temporal and Appearance Modeling with Transformer for Multiple Object Tracking [59.79252390626194]
We propose a novel solution named TransSTAM, which leverages Transformer to model both the appearance features of each object and the spatial-temporal relationships among objects.
The proposed method is evaluated on multiple public benchmarks including MOT16, MOT17, and MOT20, and it achieves a clear performance improvement in both IDF1 and HOTA.
arXiv Detail & Related papers (2022-05-31T01:19:18Z)
- Probing transfer learning with a model of synthetic correlated datasets [11.53207294639557]
Transfer learning can significantly improve the sample efficiency of neural networks.
We re-think a solvable model of synthetic data as a framework for modeling correlation between datasets.
We show that our model can capture a range of salient features of transfer learning with real data.
arXiv Detail & Related papers (2021-06-09T22:15:41Z)
- Cascaded Regression Tracking: Towards Online Hard Distractor Discrimination [202.2562153608092]
We propose a cascaded regression tracker with two sequential stages.
In the first stage, we filter out abundant easily-identified negative candidates.
In the second stage, a discrete sampling based ridge regression is designed to double-check the remaining ambiguous hard samples.
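The ridge regression used to re-score hard samples in the second stage has a simple closed form, w = (XᵀX + λI)⁻¹Xᵀy. A minimal numpy sketch (the function names and the scoring step are our own illustration, not the paper's code):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: solve (X^T X + lam*I) w = X^T y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    d = X.shape[1]
    # np.linalg.solve is preferred over an explicit matrix inverse
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_score(X, w):
    """Regression scores for candidate samples (higher = more target-like)."""
    return np.asarray(X, dtype=float) @ w
```

The regularizer λ keeps the solve well-conditioned on the small sets of ambiguous candidates that survive the first stage, where ordinary least squares would overfit.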
arXiv Detail & Related papers (2020-06-18T07:48:01Z)
- Recent Developments Combining Ensemble Smoother and Deep Generative Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoder networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.