Explainable Trajectory Representation through Dictionary Learning
- URL: http://arxiv.org/abs/2312.08052v1
- Date: Wed, 13 Dec 2023 10:59:54 GMT
- Title: Explainable Trajectory Representation through Dictionary Learning
- Authors: Yuanbo Tang, Zhiyuan Peng and Yang Li
- Abstract summary: Trajectory representation learning on a network enhances our understanding of vehicular traffic patterns.
Existing approaches using classic machine learning or deep learning embed trajectories as dense vectors, which lack interpretability.
This paper proposes an explainable trajectory representation learning framework through dictionary learning.
- Score: 7.567576186354494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory representation learning on a network enhances our understanding of
vehicular traffic patterns and benefits numerous downstream applications.
Existing approaches using classic machine learning or deep learning embed
trajectories as dense vectors, which lack interpretability and are inefficient
to store and analyze in downstream tasks. In this paper, an explainable
trajectory representation learning framework through dictionary learning is
proposed. Given a collection of trajectories on a network, it extracts a
compact dictionary of commonly used subpaths called "pathlets", which optimally
reconstruct each trajectory by simple concatenations. The resulting
representation is naturally sparse and encodes strong spatial semantics.
Theoretical analysis of our proposed algorithm is conducted to provide a
probabilistic bound on the estimation error of the optimal dictionary. A
hierarchical dictionary learning scheme is also proposed to ensure the
algorithm's scalability on large networks, leading to a multi-scale trajectory
representation. Our framework is evaluated on two large-scale real-world taxi
datasets. Compared to previous work, the dictionary learned by our method is
more compact and achieves a higher reconstruction rate on new trajectories. We
also demonstrate the method's promising performance in downstream tasks,
including trip time prediction and data compression.
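The core idea of the abstract can be sketched in a few lines: a trajectory is a sequence of road-network edges, and a dictionary of pathlets (short edge subsequences) reconstructs it by simple concatenation, yielding a sparse code. The greedy cover below is a hypothetical illustration of that reconstruction step only, not the paper's algorithm, which learns the dictionary itself and comes with probabilistic error bounds.

```python
def reconstruct(trajectory, dictionary):
    """Greedily cover `trajectory` with pathlets from `dictionary`.

    `trajectory` is a list of edge IDs; `dictionary` is a list of
    pathlets (tuples of edge IDs). Returns the indices of the pathlets
    used, in order, or None if some position cannot be covered.
    """
    code = []
    i = 0
    while i < len(trajectory):
        best = None
        for idx, pathlet in enumerate(dictionary):
            n = len(pathlet)
            if tuple(trajectory[i:i + n]) == pathlet:
                # prefer the longest matching pathlet at this position
                if best is None or n > len(dictionary[best]):
                    best = idx
        if best is None:
            return None  # trajectory not representable by this dictionary
        code.append(best)
        i += len(dictionary[best])
    return code

dictionary = [(1,), (2,), (1, 2, 3), (3, 4)]
trajectory = [1, 2, 3, 3, 4, 2]
print(reconstruct(trajectory, dictionary))  # [2, 3, 1]
```

The sparse code `[2, 3, 1]` (three pathlet indices instead of six edges) is what makes the representation compact and directly readable as a sequence of familiar subpaths.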
Related papers
- Lightweight Conceptual Dictionary Learning for Text Classification Using Information Compression [15.460141768587663]
We propose a lightweight supervised dictionary learning framework for text classification based on data compression and representation.
We evaluate our algorithm's information-theoretic performance using information bottleneck principles and introduce the information plane area rank (IPAR) as a novel metric to quantify the information-theoretic performance.
arXiv Detail & Related papers (2024-04-28T10:11:52Z) - ALSO: Automotive Lidar Self-supervision by Occupancy estimation [70.70557577874155]
We propose a new self-supervised method for pre-training the backbone of deep perception models operating on point clouds.
The core idea is to train the model on a pretext task which is the reconstruction of the surface on which the 3D points are sampled.
The intuition is that if the network is able to reconstruct the scene surface, given only sparse input points, then it probably also captures some fragments of semantic information.
arXiv Detail & Related papers (2022-12-12T13:10:19Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - Towards Interpretable Deep Metric Learning with Structural Matching [86.16700459215383]
We present a deep interpretable metric learning (DIML) method for more transparent embedding learning.
Our method is model-agnostic, which can be applied to off-the-shelf backbone networks and metric learning methods.
We evaluate our method on three major benchmarks of deep metric learning including CUB200-2011, Cars196, and Stanford Online Products.
arXiv Detail & Related papers (2021-08-12T17:59:09Z) - Semantics-STGCNN: A Semantics-guided Spatial-Temporal Graph Convolutional Network for Multi-class Trajectory Prediction [9.238700679836855]
We introduce class information into a graph convolutional neural network to better predict the trajectory of an individual.
We propose new metrics, known as the Average² Displacement Error (aADE) and Average² Final Displacement Error (aFDE).
The method consistently outperforms the state of the art on both existing and the newly proposed metrics.
arXiv Detail & Related papers (2021-08-10T15:02:50Z) - PUDLE: Implicit Acceleration of Dictionary Learning by Backpropagation [4.081440927534577]
This paper offers the first theoretical proof for empirical results through PUDLE, a Provable Unfolded Dictionary LEarning method.
We highlight the minimization impact of loss, unfolding, and backpropagation on convergence.
We complement our findings through synthetic and image denoising experiments.
arXiv Detail & Related papers (2021-05-31T18:49:58Z) - A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces [7.717214217542406]
We propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams.
This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process.
Our proposed method is directly applicable to various datasets without the need of retraining the model.
arXiv Detail & Related papers (2021-05-25T20:44:28Z) - The Interpretable Dictionary in Sparse Coding [4.205692673448206]
In our work, we illustrate that an ANN, trained using sparse coding under specific sparsity constraints, yields a more interpretable model than the standard deep learning model.
The dictionary learned by sparse coding is easier to understand, and the activations of its elements create a selective feature output.
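Why a sparse dictionary is interpretable can be seen in a minimal matching-pursuit sketch: with a fixed dictionary of unit-norm atoms, greedily picking the atom most correlated with the residual yields a sparse activation vector in which each nonzero entry names one atom. This is a generic sparse-coding illustration, not the training procedure of the paper above.

```python
import numpy as np

def matching_pursuit(D, signal, n_atoms):
    """Greedy sparse coding of `signal` against dictionary `D`.

    `D` has unit-norm columns (atoms). At each step, pick the atom most
    correlated with the current residual and subtract its contribution.
    Returns the sparse activation vector.
    """
    residual = np.asarray(signal, dtype=float).copy()
    code = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual            # correlation of each atom with residual
        k = int(np.argmax(np.abs(corr)))
        code[k] += corr[k]               # unit-norm atoms: correlation is the step size
        residual -= corr[k] * D[:, k]
    return code

# Orthonormal toy dictionary: recovery is exact in two steps.
D = np.eye(4)
signal = np.array([0.0, 3.0, 0.0, -2.0])
print(matching_pursuit(D, signal, n_atoms=2))  # [ 0.  3.  0. -2.]
```

The returned code has only two nonzero entries, and each can be read off as "atom 1 with weight 3, atom 3 with weight -2", which is the selectivity the summary above refers to.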
arXiv Detail & Related papers (2020-11-24T00:26:40Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies [60.285091454321055]
We design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix.
On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes.
arXiv Detail & Related papers (2020-03-18T13:07:51Z) - Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning [86.45526827323954]
Weakly-supervised semantic segmentation is a challenging task as no pixel-wise label information is provided for training.
We propose an iterative algorithm to learn such pairwise relations.
We show that the proposed algorithm performs favorably against the state-of-the-art methods.
arXiv Detail & Related papers (2020-02-19T10:32:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.