Human Pose Regression with Residual Log-likelihood Estimation
- URL: http://arxiv.org/abs/2107.11291v2
- Date: Mon, 26 Jul 2021 03:10:48 GMT
- Title: Human Pose Regression with Residual Log-likelihood Estimation
- Authors: Jiefeng Li, Siyuan Bian, Ailing Zeng, Can Wang, Bo Pang, Wentao Liu,
Cewu Lu
- Abstract summary: We propose a novel regression paradigm with Residual Log-likelihood Estimation (RLE) to capture the underlying output distribution.
RLE learns the change of the distribution instead of the unreferenced underlying distribution to facilitate the training process.
Compared to the conventional regression paradigm, regression with RLE brings a 12.4 mAP improvement on MSCOCO without any test-time overhead.
- Score: 48.30425850653223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heatmap-based methods dominate in the field of human pose estimation by
modelling the output distribution through likelihood heatmaps. In contrast,
regression-based methods are more efficient but suffer from inferior
performance. In this work, we explore maximum likelihood estimation (MLE) to
develop an efficient and effective regression-based method. From the
perspective of MLE, adopting different regression losses amounts to making different
assumptions about the output density function. A density function closer to the
true distribution leads to a better regression performance. In light of this,
we propose a novel regression paradigm with Residual Log-likelihood Estimation
(RLE) to capture the underlying output distribution. Concretely, RLE learns the
change of the distribution instead of the unreferenced underlying distribution
to facilitate the training process. With the proposed reparameterization
design, our method is compatible with off-the-shelf flow models. The proposed
method is effective, efficient and flexible. We show its potential in various
human pose estimation tasks with comprehensive experiments. Compared to the
conventional regression paradigm, regression with RLE brings a 12.4 mAP
improvement on MSCOCO without any test-time overhead. Moreover, for the first
time, especially on multi-person pose estimation, our regression method is
superior to the heatmap-based methods. Our code is available at
https://github.com/Jeff-sjtu/res-loglikelihood-regression
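The reparameterization idea in the abstract can be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes a standard Laplace base density Q for the normalized error, and the flow-learned residual term log G is passed in as a hypothetical precomputed value rather than produced by an actual flow model.

```python
import numpy as np

def rle_loss(mu_hat, sigma_hat, mu_gt, log_residual=0.0):
    """Simplified sketch of a residual log-likelihood loss.

    mu_hat, sigma_hat: predicted location and scale of the output density.
    mu_gt: ground-truth coordinate.
    log_residual: log G(x_bar), which in the paper is learned by a flow
                  model; here it is a hypothetical precomputed input.
    """
    # Reparameterization: normalize the error by the predicted scale
    x_bar = (mu_gt - mu_hat) / sigma_hat
    # Log-density of the standard Laplace base distribution Q
    log_q = -np.abs(x_bar) - np.log(2.0)
    # Negative log-likelihood; the log(sigma) term arises from the
    # change of variables x -> x_bar
    return -(log_q + log_residual) + np.log(sigma_hat)

# A prediction closer to the ground truth yields a lower loss
print(rle_loss(0.0, 1.0, 0.1) < rle_loss(0.0, 1.0, 1.0))
```

Note how the conventional L1 regression loss is recovered, up to a constant, when the residual term is constant and the scale is fixed; learning a non-trivial residual is what lets the density move closer to the true output distribution.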
Related papers
- DistPred: A Distribution-Free Probabilistic Inference Method for Regression and Forecasting [14.390842560217743]
We propose a novel approach called DistPred for regression and forecasting tasks.
We transform proper scoring rules that measure the discrepancy between the predicted distribution and the target distribution into a differentiable discrete form.
This allows the model to sample numerous samples in a single forward pass to estimate the potential distribution of the response variable.
arXiv Detail & Related papers (2024-06-17T10:33:00Z)
- Distributed High-Dimensional Quantile Regression: Estimation Efficiency and Support Recovery [0.0]
We focus on distributed estimation and support recovery for high-dimensional linear quantile regression.
We transform the original quantile regression into the least-squares optimization.
An efficient algorithm is developed, which enjoys high computation and communication efficiency.
arXiv Detail & Related papers (2024-05-13T08:32:22Z)
- Regression-aware Inference with LLMs [52.764328080398805]
We show that an inference strategy can be sub-optimal for common regression and scoring evaluation metrics.
We propose alternate inference strategies that estimate the Bayes-optimal solution for regression and scoring metrics in closed-form from sampled responses.
arXiv Detail & Related papers (2024-03-07T03:24:34Z)
- Engression: Extrapolation through the Lens of Distributional Regression [2.519266955671697]
We propose a neural network-based distributional regression methodology called 'engression'.
An engression model is generative in the sense that we can sample from the fitted conditional distribution and is also suitable for high-dimensional outcomes.
We show that engression can successfully perform extrapolation under some assumptions such as monotonicity, whereas traditional regression approaches such as least-squares or quantile regression fall short under the same assumptions.
arXiv Detail & Related papers (2023-07-03T08:19:00Z)
- A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression [8.663322701649454]
We introduce a new empirical Bayes approach for large-scale multiple linear regression.
Our approach combines two key ideas: the use of flexible "adaptive shrinkage" priors and variational approximations.
We show that the posterior mean from our method solves a penalized regression problem.
arXiv Detail & Related papers (2022-08-23T12:42:57Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in the NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
- Rethinking the Heatmap Regression for Bottom-up Human Pose Estimation [63.623787834984206]
We propose the scale-adaptive heatmap regression (SAHR) method, which can adaptively adjust the standard deviation for each keypoint.
However, SAHR may aggravate the imbalance between foreground and background samples, which potentially limits its improvement.
We also introduce weight-adaptive heatmap regression (WAHR) to help balance the foreground and background samples.
arXiv Detail & Related papers (2020-12-30T14:39:41Z)
- Piecewise Linear Regression via a Difference of Convex Functions [50.89452535187813]
We present a new piecewise linear regression methodology that utilizes fitting a difference of convex functions (DC functions) to the data.
We empirically validate the method, showing it to be practically implementable, and to have comparable performance to existing regression/classification methods on real-world datasets.
arXiv Detail & Related papers (2020-07-05T18:58:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.