CNNTOP: a CNN-based Trajectory Owner Prediction Method
- URL: http://arxiv.org/abs/2001.01185v1
- Date: Sun, 5 Jan 2020 07:58:28 GMT
- Title: CNNTOP: a CNN-based Trajectory Owner Prediction Method
- Authors: Xucheng Luo, Shengyang Li, Yuxiang Peng
- Abstract summary: Trajectory owner prediction is the basis for many applications such as personalized recommendation and urban planning.
Existing methods mainly employ RNNs to model trajectories semantically.
We propose a CNN-based Trajectory Owner Prediction (CNNTOP) method.
- Score: 1.3793594968500604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trajectory owner prediction is the basis for many applications such as
personalized recommendation and urban planning. Although much effort has been
put into this topic, the results achieved are still unsatisfactory. Existing methods
mainly employ RNNs to model trajectories semantically due to the inherent
sequential attribute of trajectories. However, these approaches are weak at
Point of Interest (POI) representation learning and trajectory feature
detection. Thus, the performance of existing solutions is far from the
requirements of practical applications. In this paper, we propose a novel
CNN-based Trajectory Owner Prediction (CNNTOP) method. Firstly, we connect all
POIs according to the trajectories of all users. The result is a connected graph
that can be used to generate more informative POI sequences than other
approaches. Secondly, we employ the Node2Vec algorithm to encode each POI into
a low-dimensional real value vector. Then, we transform each trajectory into a
fixed-dimensional matrix, which is similar to an image. Finally, a CNN is
designed to detect features and predict the owner of a given trajectory. The
CNN can extract informative features from the matrix representations of
trajectories by convolutional operations, batch normalization, and $K$-max
pooling operations. Extensive experiments on real datasets demonstrate that
CNNTOP substantially outperforms existing solutions in terms of
macro-Precision, macro-Recall, macro-F1, and accuracy.
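As a rough illustration of the pipeline described in the abstract, the sketch below builds a POI graph from raw trajectories, learns Node2Vec embeddings, turns each trajectory into a fixed-size matrix, and classifies it with a small CNN using batch normalization and K-max pooling. All hyperparameters, layer sizes, and library choices (networkx, the node2vec package, PyTorch) are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical CNNTOP-style pipeline sketch; dimensions and libraries are assumed.
import networkx as nx
import numpy as np
import torch
import torch.nn as nn
from node2vec import Node2Vec

EMB_DIM = 64    # assumed POI embedding dimension
MAX_LEN = 32    # assumed fixed trajectory length (pad/truncate)

def build_poi_graph(trajectories):
    """Connect consecutive POIs from every user's trajectories into one graph."""
    g = nx.Graph()
    for traj in trajectories:                      # traj: list of POI ids
        g.add_edges_from(zip(traj, traj[1:]))
    return g

def embed_pois(graph):
    """Encode each POI as a low-dimensional real-valued vector via Node2Vec."""
    walker = Node2Vec(graph, dimensions=EMB_DIM, walk_length=20, num_walks=10)
    model = walker.fit(window=5, min_count=1)      # gensim Word2Vec under the hood
    return {poi: model.wv[str(poi)] for poi in graph.nodes}

def trajectory_to_matrix(traj, poi_vecs):
    """Stack POI vectors into a fixed MAX_LEN x EMB_DIM matrix, like an image."""
    mat = np.zeros((MAX_LEN, EMB_DIM), dtype=np.float32)
    for i, poi in enumerate(traj[:MAX_LEN]):
        mat[i] = poi_vecs[poi]
    return mat

class KMaxPool(nn.Module):
    """Keep the k largest activations along the sequence dimension."""
    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):                          # x: (batch, channels, length)
        return x.topk(self.k, dim=-1).values

class CNNTOP(nn.Module):
    """Toy classifier: 1-D convolution -> batch norm -> K-max pooling -> linear."""
    def __init__(self, num_users, k=4):
        super().__init__()
        self.conv = nn.Conv1d(EMB_DIM, 128, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm1d(128)
        self.pool = KMaxPool(k)
        self.fc = nn.Linear(128 * k, num_users)

    def forward(self, x):                          # x: (batch, MAX_LEN, EMB_DIM)
        h = torch.relu(self.bn(self.conv(x.transpose(1, 2))))
        return self.fc(self.pool(h).flatten(1))    # logits over candidate owners
```

Training would then minimize cross-entropy between the logits and the owner labels, after which macro-Precision, macro-Recall, macro-F1, and accuracy can be computed on held-out trajectories.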
Related papers
- Learning Useful Representations of Recurrent Neural Network Weight Matrices [30.583752432727326]
Recurrent Neural Networks (RNNs) are general-purpose parallel-sequential computers.
How can we learn useful representations of RNN weights that facilitate RNN analysis as well as downstream tasks?
We consider several mechanistic approaches for RNN weights and adapt the permutation equivariant Deep Weight Space layer for RNNs.
Our two novel functionalist approaches extract information from RNN weights by 'interrogating' the RNN through probing inputs.
arXiv Detail & Related papers (2024-03-18T17:32:23Z) - Dynamic Semantic Compression for CNN Inference in Multi-access Edge
Computing: A Graph Reinforcement Learning-based Autoencoder [82.8833476520429]
We propose a novel semantic compression method, an autoencoder-based CNN architecture (AECNN), for effective semantic extraction and compression in partial offloading.
In the semantic encoder, we introduce a feature compression module based on the channel attention mechanism in CNNs, to compress intermediate data by selecting the most informative features.
In the semantic decoder, we design a lightweight decoder to reconstruct the intermediate data through learning from the received compressed data to improve accuracy.
arXiv Detail & Related papers (2024-01-19T15:19:47Z) - Self Similarity Matrix based CNN Filter Pruning [1.6799377888527687]
We tackle the problem of CNN model pruning with the help of Self-Similarity Matrix (SSM) computed from the 2D CNN filters.
We propose two novel algorithms to rank and prune redundant filters that contribute similar activation maps to the output (a rough sketch of the self-similarity idea appears after this list).
arXiv Detail & Related papers (2022-11-03T13:47:44Z) - Reward Shaping Using Convolutional Neural Network [13.098264947461432]
We propose a potential-based reward shaping mechanism using a Convolutional Neural Network (CNN).
The proposed VIN-RS embeds a CNN trained on computed labels using the message passing mechanism of the Hidden Markov Model.
Our results illustrate promising improvements in the learning speed and maximum cumulative reward compared to the state-of-the-art.
arXiv Detail & Related papers (2022-10-30T21:28:22Z) - Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
arXiv Detail & Related papers (2022-06-17T04:07:45Z) - Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z) - Learning from Images: Proactive Caching with Parallel Convolutional
Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce 71.6% computation time with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z) - Adaptive Nearest Neighbor Machine Translation [60.97183408140499]
kNN-MT combines pre-trained neural machine translation with token-level k-nearest-neighbor retrieval.
The traditional kNN algorithm simply retrieves the same number of nearest neighbors for each target token.
We propose Adaptive kNN-MT to dynamically determine the number of neighbors k for each target token.
arXiv Detail & Related papers (2021-05-27T09:27:42Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Convolutional Neural Nets: Foundations, Computations, and New
Applications [0.0]
CNNs are powerful machine learning models that highlight features from grid data to make predictions (regression and classification).
A common misconception is that CNNs are only capable of processing image or video data.
Here, we show how to apply CNNs to new types of applications such as optimal control, flow, monitoring, and molecular simulations.
arXiv Detail & Related papers (2021-01-13T04:20:42Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
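The Self Similarity Matrix based CNN Filter Pruning entry above refers to the sketch below: a minimal, hedged illustration of the general self-similarity idea, in which each 2D filter of a convolutional layer is flattened, a pairwise cosine-similarity matrix is computed, and filters with many near-duplicates are treated as pruning candidates. The cosine measure, threshold, and scoring rule here are assumptions for illustration, not the two algorithms proposed in that paper.

```python
# Rough illustration of scoring redundant convolutional filters with a
# self-similarity matrix; threshold and ranking rule are assumed, not the paper's.
import torch
import torch.nn.functional as F

def filter_redundancy_scores(conv_weight, threshold=0.9):
    """conv_weight: (out_channels, in_channels, kH, kW) weight tensor."""
    flat = F.normalize(conv_weight.flatten(1), dim=1)   # one unit-norm row per filter
    ssm = flat @ flat.t()                               # pairwise cosine self-similarity
    ssm.fill_diagonal_(0.0)                             # ignore self-matches
    # A filter that nearly duplicates many others is a pruning candidate.
    return (ssm > threshold).sum(dim=1)

# Example: score the first conv layer of a model and drop the top-scoring filters.
# scores = filter_redundancy_scores(model.conv1.weight.detach())
```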