On the Calibration of Human Pose Estimation
- URL: http://arxiv.org/abs/2311.17105v1
- Date: Tue, 28 Nov 2023 09:31:09 GMT
- Title: On the Calibration of Human Pose Estimation
- Authors: Kerui Gu, Rongyu Chen, Angela Yao
- Abstract summary: Calibrated ConfidenceNet (CCNet) is a light-weight post-hoc addition that improves AP by up to 1.4% on off-the-shelf pose estimation frameworks. Applied to the downstream task of mesh recovery, CCNet facilitates an additional 1.0mm decrease in 3D keypoint error.
- Score: 39.15814732856338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most 2D human pose estimation frameworks estimate keypoint confidence in an
ad-hoc manner, using heuristics such as the maximum value of heatmaps. The
confidence is part of the evaluation scheme, e.g., AP for the MSCOCO dataset,
yet has been largely overlooked in the development of state-of-the-art methods.
This paper takes the first steps in addressing miscalibration in pose
estimation. From a calibration point of view, the confidence should be aligned
with the pose accuracy. In practice, existing methods are poorly calibrated. We
show, through theoretical analysis, why a miscalibration gap exists and how to
narrow the gap. Simply predicting the instance size and adjusting the
confidence function gives considerable AP improvements. Given the black-box
nature of deep neural networks, however, it is not possible to fully close this
gap with only closed-form adjustments. As such, we go one step further and
learn network-specific adjustments by enforcing consistency between confidence
and pose accuracy. Our proposed Calibrated ConfidenceNet (CCNet) is a
light-weight post-hoc addition that improves AP by up to 1.4% on off-the-shelf
pose estimation frameworks. Applied to the downstream task of mesh recovery,
CCNet facilitates an additional 1.0mm decrease in 3D keypoint error.
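The core diagnostic behind this line of work is the gap between predicted confidence and realized accuracy. As an illustration only (not the paper's method), a minimal expected-calibration-error sketch for keypoint confidences, assuming per-prediction confidences in [0, 1] and matched accuracy scores (e.g. OKS values); all names here are ours:

```python
import numpy as np

def expected_calibration_error(confidences, accuracies, n_bins=10):
    """Bin predictions by confidence and average the |confidence - accuracy| gap.

    confidences: predicted confidences in [0, 1]
    accuracies:  matched correctness scores in [0, 1] (e.g. per-instance OKS)
    """
    confidences = np.asarray(confidences, dtype=float)
    accuracies = np.asarray(accuracies, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        # Last bin is closed on the right so confidence 1.0 is counted.
        if hi == 1.0:
            mask = (confidences >= lo) & (confidences <= hi)
        else:
            mask = (confidences >= lo) & (confidences < hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - accuracies[mask].mean())
        ece += mask.mean() * gap  # weight the gap by the bin's sample fraction
    return ece
```

A perfectly calibrated predictor (confidence equal to accuracy in every bin) scores 0; overconfident predictors score higher.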
Related papers
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
A post-hoc approach to compensating for neural networks being wrong is to perform temperature scaling.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
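Temperature scaling as summarized above divides the logits by a temperature before the softmax; the sample-dependent variant replaces the single scalar with a per-input value. A minimal sketch, with all names ours rather than the paper's:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def temperature_scale(logits, temperature):
    """Divide logits by a temperature before the softmax.

    temperature > 1 softens overconfident predictions; < 1 sharpens them.
    Passing an array of shape (N, 1) instead of a scalar gives the
    sample-dependent variant described above.
    """
    return softmax(logits / temperature)
```

The class ranking is unchanged by any positive temperature; only the confidence mass shifts, which is why the method is purely post-hoc.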
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
- Confidence Calibration for Intent Detection via Hyperspherical Space and Rebalanced Accuracy-Uncertainty Loss [17.26964140836123]
In some scenarios, users care not only about the accuracy but also about the confidence of the model.
We propose a model using the hyperspherical space and rebalanced accuracy-uncertainty loss.
Our model outperforms the existing calibration methods and achieves a significant improvement on the calibration metric.
arXiv Detail & Related papers (2022-03-17T12:01:33Z)
- Bayesian Confidence Calibration for Epistemic Uncertainty Modelling [4.358626952482686]
We introduce a framework to obtain confidence estimates in conjunction with an uncertainty of the calibration method.
We achieve state-of-the-art calibration performance for object detection calibration.
arXiv Detail & Related papers (2021-09-21T10:53:16Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
- Calibrating Deep Neural Networks using Focal Loss [77.92765139898906]
Miscalibration is a mismatch between a model's confidence and its correctness.
We show that focal loss allows us to learn models that are already very well calibrated.
We show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases.
arXiv Detail & Related papers (2020-02-21T17:35:50Z)
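The focal loss discussed in the last entry down-weights well-classified examples by scaling the log-likelihood with a (1 - p_t)^gamma factor, which discourages the overconfidence behind miscalibration. A minimal sketch (our own illustration, not the paper's implementation):

```python
import numpy as np

def focal_loss(probs, labels, gamma=2.0, eps=1e-12):
    """Focal loss: FL(p_t) = -(1 - p_t)^gamma * log(p_t), averaged over samples.

    probs:  predicted class probabilities, shape (N, C)
    labels: integer class indices, shape (N,)
    gamma=0 recovers standard cross-entropy; larger gamma shrinks the loss
    on examples the model already classifies confidently.
    """
    p_t = probs[np.arange(len(labels)), labels]  # probability of the true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))
```

For a confident correct prediction (p_t close to 1) the modulating factor is near 0, so the gradient pressure to push confidence even higher largely disappears.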
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.