Self-Correctable and Adaptable Inference for Generalizable Human Pose
Estimation
- URL: http://arxiv.org/abs/2303.11180v2
- Date: Sat, 25 Mar 2023 07:20:37 GMT
- Title: Self-Correctable and Adaptable Inference for Generalizable Human Pose
Estimation
- Authors: Zhehan Kan, Shuoshuo Chen, Ce Zhang, Yushun Tang, Zhihai He
- Abstract summary: We introduce a self-correctable and adaptable inference (SCAI) method to address the generalization challenge of network prediction.
We show that the proposed SCAI method is able to significantly improve the generalization capability and performance of human pose estimation.
- Score: 25.14459820592431
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A central challenge in human pose estimation, as well as in many other
machine learning and prediction tasks, is the generalization problem. The
learned network does not have the capability to characterize the prediction
error, generate feedback information from the test sample, and correct the
prediction error on the fly for each individual test sample, which results in
degraded performance in generalization. In this work, we introduce a
self-correctable and adaptable inference (SCAI) method to address the
generalization challenge of network prediction and use human pose estimation as
an example to demonstrate its effectiveness and performance. We learn a
correction network to correct the prediction result conditioned on a fitness
feedback error. This feedback error is generated by a learned fitness feedback
network which maps the prediction result to the original input domain and
compares it against the original input. Interestingly, we find that this
self-referential feedback error is highly correlated with the actual prediction
error. This strong correlation suggests that we can use this error as feedback
to guide the correction process. It can also be used as a loss function to
quickly adapt and optimize the correction network during the inference process.
Our extensive experimental results on human pose estimation demonstrate that
the proposed SCAI method is able to significantly improve the generalization
capability and performance of human pose estimation.
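To make the inference loop concrete, here is a minimal PyTorch-style sketch of how such self-correctable inference could look. The module names (predictor, feedback_net, correction_net), the MSE feedback error, the scalar conditioning, and the adaptation hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch

def feedback_error(feedback_net, pred, x):
    # Self-referential fitness error: map the prediction back to the
    # input domain and compare it against the original input.
    return torch.nn.functional.mse_loss(feedback_net(pred), x)

def scai_inference(predictor, feedback_net, correction_net, x,
                   steps=5, lr=1e-4):
    pred = predictor(x).detach()  # initial pose prediction
    opt = torch.optim.Adam(correction_net.parameters(), lr=lr)
    for _ in range(steps):
        err = feedback_error(feedback_net, pred, x)
        corrected = correction_net(pred, err)
        # The feedback error doubles as a loss, adapting the
        # correction network to this one test sample on the fly.
        loss = feedback_error(feedback_net, corrected, x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = feedback_error(feedback_net, pred, x)
        return correction_net(pred, err)
```

In practice the feedback error could be computed per keypoint rather than as a single scalar; the structure of the loop is unchanged: the same error both conditions the correction and serves as a test-time adaptation loss.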
Related papers
- Feature Perturbation Augmentation for Reliable Evaluation of Importance Estimators in Neural Networks [5.439020425819001]
Post-hoc interpretability methods attempt to make the inner workings of deep neural networks more interpretable.
One of the most popular evaluation frameworks is to perturb features deemed important by an interpretability method.
We propose feature perturbation augmentation (FPA) which creates and adds perturbed images during the model training.
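The abstract does not pin down the perturbation; as a hedged sketch, suppose it means randomly masking pixels and appending the perturbed copies to each training batch (helper names are hypothetical):

```python
import torch

def perturb(images, mask_frac=0.3):
    # Assumed perturbation: zero out a random fraction of pixels
    # (shared across channels) so perturbed inputs are no longer
    # out-of-distribution at evaluation time.
    keep = (torch.rand_like(images[:, :1]) > mask_frac).float()
    return images * keep

def fpa_batch(images, labels):
    # Append perturbed copies of the batch alongside the originals.
    return torch.cat([images, perturb(images)]), torch.cat([labels, labels])
```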
arXiv Detail & Related papers (2023-03-02T19:05:46Z)
- Improving Adaptive Conformal Prediction Using Self-Supervised Learning [72.2614468437919]
We train an auxiliary model with a self-supervised pretext task on top of an existing predictive model and use the self-supervised error as an additional feature to estimate nonconformity scores.
We empirically demonstrate the benefit of the additional information using both synthetic and real data on the efficiency (width), deficit, and excess of conformal prediction intervals.
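A minimal split-conformal sketch of that idea, assuming the self-supervised error is simply used to scale the residual nonconformity score (the paper's combination function and pretext task may differ):

```python
import numpy as np

def conformal_interval(y_cal, pred_cal, ss_err_cal,
                       pred_test, ss_err_test, alpha=0.1, eps=1e-8):
    # Nonconformity score: absolute residual scaled by the auxiliary
    # model's self-supervised error, so inputs with a large pretext
    # error receive wider intervals.
    scores = np.abs(y_cal - pred_cal) / (ss_err_cal + eps)
    n = len(scores)
    level = min(1.0, np.ceil((1 - alpha) * (n + 1)) / n)
    q = np.quantile(scores, level)
    half_width = q * (ss_err_test + eps)
    return pred_test - half_width, pred_test + half_width
```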
arXiv Detail & Related papers (2023-02-23T18:57:14Z)
- Modeling Uncertain Feature Representation for Domain Generalization [49.129544670700525]
We show that our method consistently improves the network generalization ability on multiple vision tasks.
Our methods are simple yet effective and can be readily integrated into networks without additional trainable parameters or loss constraints.
arXiv Detail & Related papers (2023-01-16T14:25:02Z)
- Generalizability Analysis of Graph-based Trajectory Predictor with Vectorized Representation [29.623692599892365]
Trajectory prediction is one of the essential tasks for autonomous vehicles.
Recent progress in machine learning gave birth to a series of advanced trajectory prediction algorithms.
arXiv Detail & Related papers (2022-08-06T20:19:52Z)
- Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation [22.5935068122522]
In this paper, we address the prediction accuracy problem of consistency learning methods with novel extensions of the mean-teacher (MT) model.
The accurate prediction by this model allows us to use a challenging combination of network, input data and feature perturbations to improve the consistency learning generalisation.
Results on public benchmarks show that our approach achieves remarkable improvements over the previous SOTA methods in the field.
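For orientation, a generic mean-teacher consistency step looks roughly like the following; the EMA momentum, the noise-based input perturbation, and the MSE consistency loss are generic assumptions rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    # The teacher's weights track an exponential moving average
    # of the student's weights.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def consistency_loss(student, teacher, x_unlabeled, noise_std=0.1):
    with torch.no_grad():
        target = teacher(x_unlabeled).softmax(dim=1)  # pseudo-target
    # Input perturbation: the student sees a noised copy of the image.
    noisy = x_unlabeled + noise_std * torch.randn_like(x_unlabeled)
    return F.mse_loss(student(noisy).softmax(dim=1), target)
```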
arXiv Detail & Related papers (2021-11-25T04:30:56Z)
- Understanding the Generalization of Adam in Learning Neural Networks with Proper Regularization [118.50301177912381]
We show that Adam can converge to different solutions of the objective with provably different errors, even with weight decay regularization.
We show that if the objective is convex and weight decay regularization is employed, any optimization algorithm, including Adam, will converge to the same solution.
arXiv Detail & Related papers (2021-08-25T17:58:21Z)
- Predicting Deep Neural Network Generalization with Perturbation Response Curves [58.8755389068888]
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition.
arXiv Detail & Related papers (2021-06-09T01:37:36Z)
- Generalized Adversarial Distances to Efficiently Discover Classifier Errors [0.0]
High-confidence errors are rare events for which the model is highly confident in its prediction, but is wrong.
We propose a generalization to the Adversarial Distance search that leverages concepts from adversarial machine learning.
Experimental results show that the generalized method finds errors at rates greater than expected given the confidence of the sampled predictions.
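A hedged sketch of how an adversarial-distance search for such errors might look, using a simple FGSM-style perturbation that grows until the predicted class flips (the paper's exact procedure may differ):

```python
import torch
import torch.nn.functional as F

def adversarial_distance(model, x, n_steps=10, step=1e-3):
    # Grow an FGSM-style perturbation until the predicted class flips;
    # the perturbation size reached serves as a proxy for the
    # adversarial distance of this input.
    label = model(x).argmax(dim=1)
    x_adv = x.clone().requires_grad_(True)
    for i in range(1, n_steps + 1):
        loss = F.cross_entropy(model(x_adv), label)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + step * grad.sign()).detach().requires_grad_(True)
        if (model(x_adv).argmax(dim=1) != label).all():
            return i * step  # flipped under a small perturbation
    return float('inf')  # no flip found: likely a robust prediction
```

High-confidence predictions that flip under a very small perturbation are then natural candidates to inspect as high-confidence errors.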
arXiv Detail & Related papers (2021-02-25T13:31:21Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
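As a rough illustration, the entropy-raising term could be a KL penalty that pulls the predictive distribution toward the label prior on samples assumed to lie in such overconfident regions; how those samples are found is the paper's contribution and is not modeled here.

```python
import torch
import torch.nn.functional as F

def prior_calibration_loss(model, x_overconf, label_prior):
    # Pull the predictive distribution toward the prior over labels on
    # samples assumed to come from unjustifiably overconfident regions,
    # which raises the entropy of those predictions.
    log_probs = F.log_softmax(model(x_overconf), dim=1)
    prior = label_prior.expand_as(log_probs)
    return F.kl_div(log_probs, prior, reduction='batchmean')
```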
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Adversarial Refinement Network for Human Motion Prediction [61.50462663314644]
Two popular methods, recurrent neural networks and feed-forward deep networks, are able to predict a rough motion trend.
We propose an Adversarial Refinement Network (ARNet) following a simple yet effective coarse-to-fine mechanism with novel adversarial error augmentation.
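The coarse-to-fine part of that mechanism reduces to a residual refinement, sketched below with hypothetical module names; ARNet's adversarial error augmentation is omitted.

```python
import torch

def coarse_to_fine(coarse_net, refine_net, history):
    # Coarse-to-fine prediction: forecast a rough motion trend first,
    # then let a refinement module predict a residual correction.
    coarse = coarse_net(history)
    return coarse + refine_net(coarse)
```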
arXiv Detail & Related papers (2020-11-23T05:42:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.