HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking
- URL: http://arxiv.org/abs/2009.07736v2
- Date: Tue, 29 Sep 2020 10:40:09 GMT
- Title: HOTA: A Higher Order Metric for Evaluating Multi-Object Tracking
- Authors: Jonathon Luiten, Aljosa Osep, Patrick Dendorfer, Philip Torr, Andreas
Geiger, Laura Leal-Taixe, Bastian Leibe
- Abstract summary: Multi-Object Tracking (MOT) has been notoriously difficult to evaluate.
Previous metrics overemphasize the importance of either detection or association.
We present a novel MOT evaluation metric, HOTA, which balances the effect of performing accurate detection, association and localization.
- Score: 48.497889944886516
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Object Tracking (MOT) has been notoriously difficult to evaluate.
Previous metrics overemphasize the importance of either detection or
association. To address this, we present a novel MOT evaluation metric, HOTA
(Higher Order Tracking Accuracy), which explicitly balances the effect of
performing accurate detection, association and localization into a single
unified metric for comparing trackers. HOTA decomposes into a family of
sub-metrics which are able to evaluate each of five basic error types
separately, which enables clear analysis of tracking performance. We evaluate
the effectiveness of HOTA on the MOTChallenge benchmark, and show that it is
able to capture important aspects of MOT performance not previously taken into
account by established metrics. Furthermore, we show HOTA scores better align
with human visual evaluation of tracking performance.
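The score combination described above can be sketched in a few lines. This is a minimal illustration only: it assumes the per-threshold match counts (TP/FN/FP detections, and per-match association counts TPA/FNA/FPA) have already been produced by a matching step between predictions and ground truth, which is omitted here. At each localization threshold, HOTA is the geometric mean of a detection score and an association score, and the final score averages over thresholds.

```python
import math

def det_a(tp, fn, fp):
    # Detection accuracy at one localization threshold:
    # DetA = |TP| / (|TP| + |FN| + |FP|)
    return tp / (tp + fn + fp)

def ass_a(assoc_counts):
    # Association accuracy: for each true-positive match c, compute the
    # association Jaccard |TPA(c)| / (|TPA(c)| + |FNA(c)| + |FPA(c)|),
    # then average over all matches.
    if not assoc_counts:
        return 0.0
    return sum(tpa / (tpa + fna + fpa)
               for tpa, fna, fpa in assoc_counts) / len(assoc_counts)

def hota_alpha(tp, fn, fp, assoc_counts):
    # HOTA at a single threshold: geometric mean of DetA and AssA,
    # which is what balances detection against association.
    return math.sqrt(det_a(tp, fn, fp) * ass_a(assoc_counts))

def hota(per_alpha_counts):
    # Localization enters by averaging over a range of localization
    # thresholds (the paper uses alpha from 0.05 to 0.95 in steps of 0.05).
    scores = [hota_alpha(tp, fn, fp, assoc)
              for tp, fn, fp, assoc in per_alpha_counts]
    return sum(scores) / len(scores)
```

Because the detection and association terms are multiplied rather than added, a tracker cannot compensate for poor association with strong detection (or vice versa), which is the balancing property the abstract refers to.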
Related papers
- Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In! [80.3129093617928]
Annually, at the Conference of Machine Translation (WMT), the Metrics Shared Task organizers conduct the meta-evaluation of Machine Translation (MT) metrics.
This work highlights two issues with the meta-evaluation framework currently employed in WMT, and assesses their impact on the metrics rankings.
We introduce the concept of sentinel metrics, which are designed explicitly to scrutinize the meta-evaluation process's accuracy, robustness, and fairness.
arXiv Detail & Related papers (2024-08-25T13:29:34Z)
- Machine Translation Meta Evaluation through Translation Accuracy Challenge Sets [92.38654521870444]
We introduce ACES, a contrastive challenge set spanning 146 language pairs.
This dataset aims to discover whether metrics can identify 68 translation accuracy errors.
We conduct a large-scale study by benchmarking ACES on 50 metrics submitted to the WMT 2022 and 2023 metrics shared tasks.
arXiv Detail & Related papers (2024-01-29T17:17:42Z)
- Joint Metrics Matter: A Better Standard for Trajectory Forecasting [67.1375677218281]
Multi-modal trajectory forecasting methods are typically evaluated using single-agent metrics (marginal metrics).
Only focusing on marginal metrics can lead to unnatural predictions, such as colliding trajectories or diverging trajectories for people who are clearly walking together as a group.
We present the first comprehensive evaluation of state-of-the-art trajectory forecasting methods with respect to multi-agent metrics (joint metrics): JADE, JFDE, and collision rate.
arXiv Detail & Related papers (2023-05-10T16:27:55Z)
- Extrinsic Evaluation of Machine Translation Metrics [78.75776477562087]
It is unclear if automatic metrics are reliable at distinguishing good translations from bad translations at the sentence level.
We evaluate the segment-level performance of the most widely used MT metrics (chrF, COMET, BERTScore, etc.) on three downstream cross-lingual tasks.
Our experiments demonstrate that all metrics exhibit negligible correlation with the extrinsic evaluation of the downstream outcomes.
arXiv Detail & Related papers (2022-12-20T14:39:58Z)
- Tracking Every Thing in the Wild [61.917043381836656]
We introduce a new metric, Track Every Thing Accuracy (TETA), breaking tracking measurement into three sub-factors: localization, association, and classification.
Our experiments show that TETA evaluates trackers more comprehensively, and TETer achieves significant improvements on the challenging large-scale datasets BDD100K and TAO.
arXiv Detail & Related papers (2022-07-26T15:37:19Z)
- mvHOTA: A multi-view higher order tracking accuracy metric to measure spatial and temporal associations in multi-point detection [1.039718070553655]
Multi-object tracking (MOT) is a challenging task that involves detecting objects in the scene and tracking them across a sequence of frames.
The higher order tracking accuracy (HOTA) metric has recently become the main evaluation metric for benchmarking MOT methods on datasets such as KITTI.
We propose a multi-view higher order tracking metric (mvHOTA) to determine the accuracy of multi-point (multi-instance and multi-class) detection.
arXiv Detail & Related papers (2022-06-19T10:31:53Z)
- On the detection-to-track association for online multi-object tracking [30.883165972525347]
We propose a hybrid track association (HTA) algorithm that models the historical appearance distances of a track with an incremental Gaussian mixture model (IGMM).
Experimental results on three MOT benchmarks confirm that HTA effectively improves the target identification performance with a small compromise to the tracking speed.
arXiv Detail & Related papers (2021-07-01T14:44:12Z)
- SQE: a Self Quality Evaluation Metric for Parameters Optimization in Multi-Object Tracking [25.723436561224297]
We present a novel self quality evaluation metric SQE for parameters optimization in the challenging yet critical multi-object tracking task.
Unlike existing metrics, SQE reflects the internal characteristics of trajectory hypotheses and measures tracking performance without ground truth.
arXiv Detail & Related papers (2020-04-16T06:07:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.