Optimizing Vessel Trajectory Compression
- URL: http://arxiv.org/abs/2005.05418v1
- Date: Mon, 11 May 2020 20:38:56 GMT
- Title: Optimizing Vessel Trajectory Compression
- Authors: Giannis Fikioris, Kostas Patroumpas, Alexander Artikis
- Abstract summary: In previous work we introduced a trajectory detection module that can provide summarized representations of vessel trajectories by consuming AIS positional messages online.
This methodology can provide reliable trajectory synopses with little deviation from the original course by discarding at least 70% of the raw data as redundant.
However, such trajectory compression is very sensitive to parametrization.
We take into account the type of each vessel in order to provide a suitable configuration that can yield improved trajectory synopses.
- Score: 71.42030830910227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In previous work we introduced a trajectory detection module that can provide
summarized representations of vessel trajectories by consuming AIS positional
messages online. This methodology can provide reliable trajectory synopses with
little deviation from the original course by discarding at least 70% of the
raw data as redundant. However, such trajectory compression is very sensitive
to parametrization. In this paper, our goal is to fine-tune the selection of
these parameter values. We take into account the type of each vessel in order
to provide a suitable configuration that can yield improved trajectory
synopses, both in terms of approximation error and compression ratio.
Furthermore, we employ a genetic algorithm converging to a suitable
configuration per vessel type. Our tests against a publicly available AIS
dataset have shown that compression efficiency is comparable to, or even better
than, that obtained with the default parametrization, without resorting to
laborious data inspection.
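The per-vessel-type tuning described above can be sketched as a small genetic algorithm searching a bounded parameter space. Note that the parameter names, their bounds, and the stand-in fitness function below are illustrative assumptions for this sketch; the paper's actual synopses module and its error/compression metrics are not reproduced here.

```python
import random

random.seed(42)

# Assumed tunable compression thresholds (illustrative, not the paper's actual names).
PARAM_BOUNDS = {
    "distance_threshold_m": (10.0, 500.0),
    "angle_threshold_deg": (1.0, 30.0),
    "speed_threshold_kn": (0.1, 5.0),
}

def random_config():
    """Sample a random configuration within the parameter bounds."""
    return {k: random.uniform(lo, hi) for k, (lo, hi) in PARAM_BOUNDS.items()}

def fitness(cfg):
    """Stand-in objective: in practice this would run the compression module
    and combine approximation error with compression ratio. Here we just
    penalize normalized squared distance from assumed 'good' values."""
    target = {"distance_threshold_m": 120.0,
              "angle_threshold_deg": 8.0,
              "speed_threshold_kn": 1.5}
    return -sum((cfg[k] - target[k]) ** 2 / (hi - lo) ** 2
                for k, (lo, hi) in PARAM_BOUNDS.items())

def crossover(a, b):
    """Uniform crossover: each gene comes from one of the two parents."""
    return {k: random.choice((a[k], b[k])) for k in PARAM_BOUNDS}

def mutate(cfg, rate=0.2):
    """Gaussian mutation, clamped to the parameter bounds."""
    out = dict(cfg)
    for k, (lo, hi) in PARAM_BOUNDS.items():
        if random.random() < rate:
            out[k] = min(hi, max(lo, out[k] + random.gauss(0, (hi - lo) * 0.1)))
    return out

def evolve(pop_size=30, generations=40):
    """Evolve one configuration; in the paper's setting this would be run
    once per vessel type on that type's trajectories."""
    pop = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 5]  # keep the best 20%
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print({k: round(v, 1) for k, v in best.items()})
```

Running one `evolve()` call per vessel type would yield one tuned configuration per type, which is the convergence behavior the abstract describes.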
Related papers
- Distributed Nonparametric Estimation: from Sparse to Dense Samples per Terminal [9.766173684831324]
We characterize the minimax optimal rates for all regimes, and identify phase transitions of the optimal rates as the samples per terminal vary from sparse to dense.
This fully solves the problem left open by previous works, whose scopes are limited to regimes with either dense samples or a single sample per terminal.
The optimal rates are immediate for various special cases such as density estimation, Gaussian, binary, Poisson and heteroskedastic regression models.
arXiv Detail & Related papers (2025-01-14T06:41:55Z)
- Dataset Distillation as Pushforward Optimal Quantization [1.039189397779466]
We propose a simple extension of the state-of-the-art data distillation method D4M, achieving better performance on the ImageNet-1K dataset with trivial additional computation.
We demonstrate that when equipped with an encoder-decoder structure, the empirically successful disentangled methods can be reformulated as an optimal quantization problem.
In particular, we link existing disentangled dataset distillation methods to the classical optimal quantization and Wasserstein barycenter problems, demonstrating consistency of distilled datasets for diffusion-based generative priors.
arXiv Detail & Related papers (2025-01-13T20:41:52Z)
- Accelerated Methods with Compressed Communications for Distributed Optimization Problems under Data Similarity [55.03958223190181]
We propose the first theoretically grounded accelerated algorithms utilizing unbiased and biased compression under data similarity.
Our results achieve record rates and are confirmed by experiments on different average losses and datasets.
arXiv Detail & Related papers (2024-12-21T00:40:58Z)
- ALoRE: Efficient Visual Adaptation via Aggregating Low Rank Experts [71.91042186338163]
ALoRE is a novel PETL method that reuses the hypercomplex parameterized space constructed by Kronecker product to Aggregate Low Rank Experts.
Thanks to the artful design, ALoRE maintains negligible extra parameters and can be effortlessly merged into the frozen backbone.
arXiv Detail & Related papers (2024-12-11T12:31:30Z)
- Automatic Outlier Rectification via Optimal Transport [7.421153752627664]
We propose a novel conceptual framework to detect outliers using optimal transport with a concave cost function.
We take the first step to utilize the optimal transport distance with a concave cost function to construct a rectification set.
Then, we select the best distribution within the rectification set to perform the estimation task.
arXiv Detail & Related papers (2024-03-21T01:30:24Z)
- Compression of Structured Data with Autoencoders: Provable Benefit of Nonlinearities and Depth [83.15263499262824]
We prove that gradient descent converges to a solution that completely disregards the sparse structure of the input.
We show how to improve upon Gaussian performance for the compression of sparse data by adding a denoising function to a shallow architecture.
We validate our findings on image datasets, such as CIFAR-10 and MNIST.
arXiv Detail & Related papers (2024-02-07T16:32:29Z)
- Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that weights trained on synthetic data are robust against accumulated-error perturbations when regularized towards a flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z)
- Unbalanced CO-Optimal Transport [16.9451175221198]
CO-optimal transport (COOT) extends optimal transport by inferring an alignment between features as well as samples.
We show that it is sensitive to outliers that are omnipresent in real-world data.
This prompts us to propose unbalanced COOT for which we provably show its robustness to noise.
arXiv Detail & Related papers (2022-05-30T08:43:19Z)
- Feasible Low-thrust Trajectory Identification via a Deep Neural Network Classifier [1.5076964620370268]
This work proposes a deep neural network (DNN) classifier that accurately identifies feasible low-thrust transfers prior to the optimization process.
The DNN-classifier achieves an overall accuracy of 97.9%, which has the best performance among the tested algorithms.
arXiv Detail & Related papers (2022-02-10T11:34:37Z)
- Deep Shells: Unsupervised Shape Correspondence with Optimal Transport [52.646396621449]
We propose a novel unsupervised learning approach to 3D shape correspondence.
We show that the proposed method significantly improves over the state-of-the-art on multiple datasets.
arXiv Detail & Related papers (2020-10-28T22:24:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.