OpenTraj: Assessing Prediction Complexity in Human Trajectories Datasets
- URL: http://arxiv.org/abs/2010.00890v2
- Date: Mon, 2 Nov 2020 21:24:33 GMT
- Title: OpenTraj: Assessing Prediction Complexity in Human Trajectories Datasets
- Authors: Javad Amirian, Bingqing Zhang, Francisco Valente Castro, Juan Jose
Baldelomar, Jean-Bernard Hayet and Julien Pettre
- Abstract summary: Human Trajectory Prediction (HTP) has gained much momentum in recent years, and many solutions have been proposed to solve it.
This paper addresses the question of evaluating how complex a given dataset is with respect to the prediction problem.
To assess a dataset's complexity, we define a series of indicators around three concepts: trajectory predictability, trajectory regularity, and context complexity.
- Score: 5.219568203653524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Trajectory Prediction (HTP) has gained much momentum in recent
years, and many solutions have been proposed to solve it. Since proper
benchmarking is a key issue for comparing methods, this paper addresses the
question of evaluating how complex a given dataset is with respect to the
prediction problem. To assess a dataset's complexity, we define a series of
indicators around three concepts: trajectory predictability, trajectory
regularity, and context complexity. We compare the most common datasets used
in HTP in the light of these indicators and discuss what this may imply for
the benchmarking of HTP algorithms. Our source code is released on GitHub.
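The paper defines its indicators formally; as a rough illustration of the predictability concept only, the sketch below scores a dataset by how well a trivial constant-velocity extrapolation predicts the future, measured with the Average Displacement Error (ADE) common in HTP. The `constant_velocity_ade` helper, the array layout, and the 8/12-frame split (borrowed from the usual ETH/UCY protocol) are illustrative assumptions, not OpenTraj's actual code.

```python
# Minimal sketch (assumed helper, not OpenTraj's indicators): probe how
# "predictable" a trajectory dataset is by scoring a trivial constant-velocity
# extrapolation with Average Displacement Error (ADE). Datasets on which this
# baseline already does well are, in this narrow sense, less complex.
import numpy as np

def constant_velocity_ade(trajectories: np.ndarray,
                          n_obs: int = 8, n_pred: int = 12) -> float:
    """trajectories: (num_tracks, n_obs + n_pred, 2) array of x/y positions."""
    obs = trajectories[:, :n_obs]                    # observed history
    gt = trajectories[:, n_obs:n_obs + n_pred]       # ground-truth future
    vel = obs[:, -1] - obs[:, -2]                    # last observed velocity
    steps = np.arange(1, n_pred + 1)[None, :, None]  # (1, n_pred, 1)
    pred = obs[:, -1:, :] + steps * vel[:, None, :]  # extrapolate linearly
    # ADE: Euclidean error averaged over predicted steps and tracks.
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Usage on synthetic random walks; a lower ADE for this trivial baseline
# suggests an "easier" (more predictable) dataset.
rng = np.random.default_rng(0)
fake = np.cumsum(rng.normal(0.0, 0.1, size=(100, 20, 2)), axis=1)
print(f"constant-velocity ADE: {constant_velocity_ade(fake):.3f}")
```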
Related papers
- Regularization-Based Methods for Ordinal Quantification [49.606912965922504]
We study the ordinal case, i.e., the case in which a total order is defined on the set of n>2 classes.
We propose a novel class of regularized OQ algorithms, which outperforms existing algorithms in our experiments.
arXiv Detail & Related papers (2023-10-13T16:04:06Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Local Evaluation of Time Series Anomaly Detection Algorithms [9.717823994163277]
We show that an adversary algorithm can reach high precision and recall on almost any dataset under weak assumptions.
We propose a theoretically grounded, robust, parameter-free and interpretable extension to precision/recall metrics.
arXiv Detail & Related papers (2022-06-27T10:18:41Z)
- Deep Probabilistic Graph Matching [72.6690550634166]
We propose a deep learning-based graph matching framework that works on the original quadratic assignment problem (QAP) without compromising the matching constraints.
The proposed method is evaluated on three widely used benchmarks (Pascal VOC, Willow Object and SPair-71k) and outperforms all previous state-of-the-art methods on all of them.
arXiv Detail & Related papers (2022-01-05T13:37:27Z)
- Are Missing Links Predictable? An Inferential Benchmark for Knowledge Graph Completion [79.07695173192472]
InferWiki improves upon existing benchmarks in inferential ability, assumptions, and patterns.
Each test sample is predictable from supporting data in the training set.
In experiments, we curate two settings of InferWiki varying in size and structure, and apply the construction process to CoDEx to obtain comparative datasets.
arXiv Detail & Related papers (2021-08-03T09:51:15Z)
- Comparative Analysis of Extreme Verification Latency Learning Algorithms [3.3439097577935213]
This paper is a comprehensive survey and comparative analysis of EVL algorithms, pointing out the weaknesses and strengths of the different approaches.
It is a first effort to provide the research community with a review of the existing algorithms in this field.
arXiv Detail & Related papers (2020-11-26T16:34:56Z)
- Quantifying the Complexity of Standard Benchmarking Datasets for Long-Term Human Trajectory Prediction [8.870188183999852]
We propose an approach for quantifying the amount of information contained in a dataset from a prototype-based dataset representation (a loose sketch of this idea follows at the end of this list).
A large-scale complexity analysis is conducted on several human trajectory prediction benchmarking datasets.
arXiv Detail & Related papers (2020-05-28T12:00:41Z)
- JHU-CROWD++: Large-Scale Crowd Counting Dataset and A Benchmark Method [92.15895515035795]
We introduce a new large-scale, unconstrained crowd counting dataset (JHU-CROWD++) that contains 4,372 images with 1.51 million annotations.
We propose a novel crowd counting network that progressively generates crowd density maps via residual error estimation.
arXiv Detail & Related papers (2020-04-07T14:59:35Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data (a minimal sketch of the underlying winner-takes-all idea follows at the end of this list).
We also introduce a novel metric for ambiguous problems that better accounts for uncertainty.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
- Nonlinear Traffic Prediction as a Matrix Completion Problem with Ensemble Learning [1.8352113484137629]
This paper addresses the problem of short-term traffic prediction for signalized traffic operations management.
We focus on predicting sensor states at high resolution (second-by-second).
Our contributions can be summarized as three insights.
arXiv Detail & Related papers (2020-01-08T13:10:40Z)
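For the entry above on quantifying benchmark complexity via a prototype-based dataset representation, here is a loose sketch of the general idea, not that paper's method: cluster origin-aligned trajectory snippets with k-means and use the entropy of prototype usage as a crude information proxy. The `prototype_entropy` helper, the choice of k-means, and `k = 16` are illustrative assumptions.

```python
# Loose sketch (assumed helper): a prototype-based complexity proxy.
# Flatten fixed-length trajectory snippets, cluster them into k prototypes,
# and score the dataset by the entropy of the prototype-usage distribution;
# more uniformly used prototypes suggest a more varied dataset.
import numpy as np
from sklearn.cluster import KMeans

def prototype_entropy(trajectories: np.ndarray, k: int = 16) -> float:
    """trajectories: (num_tracks, length, 2); returns entropy in bits."""
    n = len(trajectories)
    # Translate each snippet to start at the origin so clustering reflects
    # motion shape rather than absolute position in the scene.
    aligned = trajectories - trajectories[:, :1, :]
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
        aligned.reshape(n, -1))
    probs = np.bincount(labels, minlength=k) / n
    probs = probs[probs > 0]
    return float(-(probs * np.log2(probs)).sum())  # at most log2(k) bits

rng = np.random.default_rng(0)
walks = np.cumsum(rng.normal(0.0, 0.1, size=(200, 20, 2)), axis=1)
print(f"prototype entropy: {prototype_entropy(walks):.2f} bits")
```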
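For the MHP entry above: the core of Multiple Hypothesis Prediction is a relaxed winner-takes-all (WTA) meta-loss in which, out of M predicted futures, the one closest to the ground truth receives almost all of the training signal, so the prediction heads specialize on different outcomes. The sketch below computes such a loss; `relaxed_wta_loss` and `epsilon = 0.05` are illustrative assumptions, and the paper's sequential extension is not reproduced here.

```python
# Minimal sketch (assumed helper) of a relaxed winner-takes-all loss, the
# idea behind Multiple Hypothesis Prediction: the best of M hypotheses gets
# weight 1 - epsilon, the rest share epsilon, so heads can specialize.
import numpy as np

def relaxed_wta_loss(hypotheses: np.ndarray, target: np.ndarray,
                     epsilon: float = 0.05) -> float:
    """hypotheses: (M, T, 2) candidate futures; target: (T, 2) true future."""
    m = len(hypotheses)
    # Per-hypothesis error: mean squared displacement over the horizon.
    errors = ((hypotheses - target[None]) ** 2).sum(-1).mean(-1)  # (M,)
    weights = np.full(m, epsilon / (m - 1))
    weights[np.argmin(errors)] = 1.0 - epsilon  # winner takes (almost) all
    return float((weights * errors).sum())

# Usage: 5 candidate 12-step futures against one ground-truth future.
rng = np.random.default_rng(0)
hyps = rng.normal(size=(5, 12, 2))
truth = rng.normal(size=(12, 2))
print(f"relaxed WTA loss: {relaxed_wta_loss(hyps, truth):.3f}")
```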