Probabilistic Regression for Visual Tracking
- URL: http://arxiv.org/abs/2003.12565v1
- Date: Fri, 27 Mar 2020 17:58:37 GMT
- Title: Probabilistic Regression for Visual Tracking
- Authors: Martin Danelljan, Luc Van Gool, Radu Timofte
- Abstract summary: We propose a probabilistic regression formulation and apply it to tracking.
Our network predicts the conditional probability density of the target state given an input image.
Our tracker sets a new state-of-the-art on six datasets, achieving 59.8% AUC on LaSOT and 75.8% Success on TrackingNet.
- Score: 193.05958682821444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual tracking is fundamentally the problem of regressing the state of the
target in each video frame. While significant progress has been achieved,
trackers are still prone to failures and inaccuracies. It is therefore crucial
to represent the uncertainty in the target estimation. Although current
prominent paradigms rely on estimating a state-dependent confidence score, this
value lacks a clear probabilistic interpretation, complicating its use.
In this work, we therefore propose a probabilistic regression formulation and
apply it to tracking. Our network predicts the conditional probability density
of the target state given an input image. Crucially, our formulation is capable
of modeling label noise stemming from inaccurate annotations and ambiguities in
the task. The regression network is trained by minimizing the Kullback-Leibler
divergence. When applied for tracking, our formulation not only allows a
probabilistic representation of the output, but also substantially improves the
performance. Our tracker sets a new state-of-the-art on six datasets, achieving
59.8% AUC on LaSOT and 75.8% Success on TrackingNet. The code and models are
available at https://github.com/visionml/pytracking.
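The training objective described in the abstract — predicting a conditional density over the target state and minimizing a KL divergence against a label distribution that models annotation noise — can be sketched over a discretized state space. The grid of state bins, the Gaussian label model, and all function names below are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def kl_loss(scores, centers, y_annot, sigma=2.0):
    """KL(p_label || p_pred) over a discretized target-state grid.

    scores  : raw network outputs, one per state bin (assumed)
    centers : the state value represented by each bin
    y_annot : the (possibly noisy) ground-truth annotation
    sigma   : width of the Gaussian label model, representing label noise
    """
    # predicted conditional density over the discretized states
    p_pred = softmax(scores)
    # label distribution: Gaussian centered on the annotation
    p_label = np.exp(-0.5 * ((centers - y_annot) / sigma) ** 2)
    p_label /= p_label.sum()
    eps = 1e-12
    return float(np.sum(p_label * (np.log(p_label + eps) - np.log(p_pred + eps))))
```

A network trained with this loss is pushed to reproduce the full label distribution rather than a single point estimate, which is what gives the output a probabilistic interpretation; the loss is zero exactly when the predicted density matches the label density.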
Related papers
- Flexible Heteroscedastic Count Regression with Deep Double Poisson Networks [4.58556584533865]
We propose the Deep Double Poisson Network (DDPN) to produce accurate, input-conditional uncertainty representations.
DDPN vastly outperforms existing discrete models.
It can be applied to a variety of count regression datasets.
arXiv Detail & Related papers (2024-06-13T16:02:03Z)
- Robust Visual Tracking via Iterative Gradient Descent and Threshold Selection [4.978166837959101]
We introduce a novel robust linear regression estimator, which achieves favorable performance when the error vector follows an i.i.d. Gaussian-Laplacian distribution.
In addition, we extend IGDTS to a generative tracker, and apply IGDTS-distance to measure the deviation between the sample and the model.
Experimental results on several challenging image sequences show that the proposed tracker outperforms existing trackers.
arXiv Detail & Related papers (2024-06-02T01:51:09Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence from labeled source data, predicting target accuracy as the fraction of unlabeled target examples whose confidence exceeds the threshold.
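The ATC idea above can be sketched in a few lines: pick the confidence threshold so that the fraction of source points above it matches the source accuracy, then report the above-threshold fraction on the unlabeled target set. Function names and the quantile-based threshold choice are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def learn_atc_threshold(src_conf, src_correct):
    """Choose a threshold on labeled source data so that the fraction of
    source examples with confidence above it equals the source accuracy."""
    acc = src_correct.mean()
    # the (1 - acc) quantile leaves a fraction `acc` of confidences above it
    return np.quantile(src_conf, 1.0 - acc)

def predict_target_accuracy(tgt_conf, threshold):
    """Predicted target accuracy = fraction of unlabeled target examples
    whose confidence exceeds the learned threshold."""
    return float((tgt_conf > threshold).mean())
```

On a held-out split drawn from the source distribution this estimate recovers the true accuracy by construction; the paper's claim is that it also transfers usefully under distribution shift.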
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
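One simple way to picture "raising the entropy of a prediction towards that of the label prior", as described above, is to interpolate the predicted distribution with the prior; this mixing sketch is an illustrative simplification, not the paper's conditional procedure for locating overconfident regions.

```python
import numpy as np

def temper_toward_prior(p_pred, p_prior, lam):
    """Interpolate a predicted distribution toward the label prior.

    lam = 0 leaves p_pred unchanged; lam = 1 replaces it with the prior.
    Since entropy is concave, mixing with a higher-entropy prior raises
    the entropy of the prediction.
    """
    return (1.0 - lam) * p_pred + lam * p_prior

def entropy(p):
    eps = 1e-12
    return float(-np.sum(p * np.log(p + eps)))
```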
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Second-Moment Loss: A Novel Regression Objective for Improved Uncertainties [7.766663822644739]
Quantification of uncertainty is one of the most promising approaches to establish safe machine learning.
One of the most commonly used approaches so far is Monte Carlo dropout, which is computationally cheap and easy to apply in practice.
We propose a new objective, referred to as second-moment loss, to address this issue.
arXiv Detail & Related papers (2020-12-23T14:17:33Z)
- Probing Model Signal-Awareness via Prediction-Preserving Input Minimization [67.62847721118142]
We evaluate models' ability to capture the correct vulnerability signals to produce their predictions.
We measure the signal awareness of models using a new metric we propose: Signal-aware Recall (SAR).
The results show a sharp drop in the model's Recall from the high 90s to sub-60s with the new metric.
arXiv Detail & Related papers (2020-11-25T20:05:23Z)
- Tracklets Predicting Based Adaptive Graph Tracking [51.352829280902114]
We present an accurate and end-to-end learning framework for multi-object tracking, namely TPAGT.
It re-extracts the features of the tracklets in the current frame based on motion prediction, which is key to solving the problem of inconsistent features.
arXiv Detail & Related papers (2020-10-18T16:16:49Z)
- MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection [6.230751621285322]
In object detection with deep neural networks, the box-wise objectness score tends to be overconfident.
We present a post-processing method that, for any given neural network, provides predictive uncertainty estimates and quality estimates.
arXiv Detail & Related papers (2020-10-04T21:49:23Z)
- PrognoseNet: A Generative Probabilistic Framework for Multimodal Position Prediction given Context Information [2.5302126831371226]
We propose an approach which reformulates the prediction problem as a classification task, allowing powerful classification tools to be applied.
A smart choice of the latent variable allows for the reformulation of the log-likelihood function as a combination of a classification problem and a much simplified regression problem.
The proposed approach can easily incorporate context information and does not require any preprocessing of the data.
arXiv Detail & Related papers (2020-10-02T06:13:41Z) - Learning a Unified Sample Weighting Network for Object Detection [113.98404690619982]
Region sampling or weighting is significantly important to the success of modern region-based object detectors.
We argue that sample weighting should be data-dependent and task-dependent.
We propose a unified sample weighting network to predict a sample's task weights.
arXiv Detail & Related papers (2020-06-11T16:19:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.