Probabilistic Regression with Huber Distributions
- URL: http://arxiv.org/abs/2111.10296v1
- Date: Fri, 19 Nov 2021 16:12:15 GMT
- Title: Probabilistic Regression with Huber Distributions
- Authors: David Mohlin, Gerald Bianchi, Josephine Sullivan
- Abstract summary: We describe a probabilistic method for estimating the position of an object along with its covariance matrix using neural networks.
Our method is designed to be robust to outliers and to have bounded gradients with respect to the network outputs, among other desirable properties.
We evaluate our method on popular body pose and facial landmark datasets and achieve performance on par with or exceeding that of non-heatmap methods.
- Score: 6.681943980068051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we describe a probabilistic method for estimating the position
of an object along with its covariance matrix using neural networks. Our method
is designed to be robust to outliers and to have bounded gradients with respect
to the network outputs, among other desirable properties. To achieve this we
introduce a novel probability distribution inspired by the Huber loss. We also
introduce a new way to parameterize positive definite matrices to ensure
invariance to the choice of orientation for the coordinate system we regress
over. We evaluate our method on popular body pose and facial landmark datasets
and achieve performance on par with or exceeding that of non-heatmap methods.
Our code is available at
github.com/Davmo049/Public_prob_regression_with_huber_distributions
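The abstract names its two ingredients without spelling them out; the sketch below (Python, all names ours, not the authors' code) shows one plausible reading: a Huber-style negative log-likelihood over a Mahalanobis-type distance, with the precision matrix obtained from an unconstrained symmetric matrix via the matrix exponential, a rotation-equivariant way to guarantee positive definiteness.
```python
import numpy as np

def huber(r, delta=1.0):
    """Huber function: quadratic for |r| <= delta, linear beyond it."""
    return np.where(np.abs(r) <= delta,
                    0.5 * r ** 2,
                    delta * (np.abs(r) - 0.5 * delta))

def precision_from_symmetric(S):
    """Map an unconstrained symmetric matrix S to a positive definite
    precision matrix via the matrix exponential. Rotating the coordinate
    system rotates S, so the construction has no preferred orientation."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def huber_nll(x, mu, S, delta=1.0):
    """Negative log-likelihood of x under a Huber-style distribution,
    up to the delta-dependent normalizing constant."""
    P = precision_from_symmetric(S)
    r = np.sqrt((x - mu) @ P @ (x - mu))   # Mahalanobis-style distance
    return huber(r, delta) - 0.5 * np.log(np.linalg.det(P))
```
Because the Huber tail is linear, the gradient of this loss with respect to the residual is bounded, which is the robustness property the abstract emphasizes; the exact density and normalization in the paper may differ from this sketch.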
Related papers
- BALI: Learning Neural Networks via Bayesian Layerwise Inference [6.7819070167076045]
We introduce a new method for learning Bayesian neural networks, treating them as a stack of multivariate Bayesian linear regression models.
The main idea is to infer the layerwise posterior exactly if we know the target outputs of each layer.
We define these pseudo-targets as the layer outputs from the forward pass, updated by the backpropagated gradient of the objective function.
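The exact layerwise posterior referred to here is the textbook conjugate update for Bayesian linear regression; a minimal sketch under an isotropic Gaussian prior and noise model (illustrative names, not the paper's code):
```python
import numpy as np

def layer_posterior(X, Y, prior_prec=1.0, noise_prec=1.0):
    """Exact Gaussian posterior over a linear layer's weights, given layer
    inputs X (n x d_in) and pseudo-targets Y (n x d_out). In BALI the
    pseudo-targets come from the forward pass updated by backpropagated
    gradients; here they are just inputs to the standard conjugate update."""
    d = X.shape[1]
    post_prec = prior_prec * np.eye(d) + noise_prec * X.T @ X     # posterior precision
    post_mean = noise_prec * np.linalg.solve(post_prec, X.T @ Y)  # posterior mean
    return post_mean, post_prec
```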
arXiv Detail & Related papers (2024-11-18T22:18:34Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
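How a density-ratio view yields an abstention rule can be sketched generically (threshold and names are ours; the paper's construction of the idealized distribution is the substantive part):
```python
import numpy as np

def predict_with_rejection(class_scores, density_ratio, threshold=0.5):
    """Return the argmax class where the estimated ratio between the idealized
    and observed data densities is high enough, and -1 (abstain) elsewhere.
    'density_ratio' is assumed to come from a separately learned estimator."""
    labels = np.argmax(class_scores, axis=1)
    return np.where(density_ratio >= threshold, labels, -1)
```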
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Collaborative Heterogeneous Causal Inference Beyond Meta-analysis [68.4474531911361]
We propose a collaborative inverse propensity score estimator for causal inference with heterogeneous data.
Our method shows significant improvements over the methods based on meta-analysis when heterogeneity increases.
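For reference, the single-source inverse propensity score estimator that collaborative variants build on is standard; a minimal sketch (the paper's contribution lies in how such terms are combined across heterogeneous sources):
```python
import numpy as np

def ipw_ate(y, t, e):
    """Inverse propensity weighted estimate of the average treatment effect:
    y outcomes, t binary treatment indicators, e estimated propensities
    P(T=1|X), all arrays of the same length."""
    return np.mean(t * y / e - (1 - t) * y / (1 - e))
```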
arXiv Detail & Related papers (2024-04-24T09:04:36Z)
- Beyond the Known: Adversarial Autoencoders in Novelty Detection [2.7486022583843233]
In novelty detection, the goal is to decide if a new data point should be categorized as an inlier or an outlier.
We use a similar framework but with a lightweight deep network, and we adopt a probabilistic score with reconstruction error.
Our results indicate that our approach is effective at learning the target class, and it outperforms recent state-of-the-art methods on several benchmark datasets.
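A minimal sketch of the reconstruction-error scoring underlying this family of detectors (encoder/decoder stand in for a trained network; the paper's probabilistic score is more elaborate):
```python
import numpy as np

def novelty_score(x, encoder, decoder):
    """Score samples by squared reconstruction error under an autoencoder
    trained on the inlier class; larger scores suggest outliers. Thresholding
    (or converting to a probability) gives the inlier/outlier decision."""
    x_hat = decoder(encoder(x))
    return np.sum((x - x_hat) ** 2, axis=-1)
```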
arXiv Detail & Related papers (2024-04-06T00:04:19Z)
- Uncertainty Quantification via Stable Distribution Propagation [60.065272548502]
We propose a new approach for propagating stable probability distributions through neural networks.
Our method is based on local linearization, which we show to be an optimal approximation in terms of total variation distance for the ReLU non-linearity.
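Local linearization amounts to pushing the mean through the nonlinearity and scaling the spread by the derivative at the mean; a one-dimensional ReLU sketch for intuition (ours, not the paper's code):
```python
import numpy as np

def relu_linearized(mu, sigma):
    """Propagate N(mu, sigma^2) through ReLU by linearizing at the mean:
    output location relu(mu), output scale |relu'(mu)| * sigma. The same
    elementwise recipe applies to other stable input distributions."""
    mu = np.asarray(mu, dtype=float)
    slope = (mu > 0).astype(float)   # ReLU derivative at the mean
    return np.maximum(mu, 0.0), slope * np.asarray(sigma, dtype=float)
```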
arXiv Detail & Related papers (2024-02-13T09:40:19Z)
- Probabilistic 3D Regression with Projected Huber Distribution [5.888646114353371]
We show that our method produces uncertainties which correlate well with empirical errors.
We also show that the mode of the predicted distribution outperforms our regression baselines.
arXiv Detail & Related papers (2023-03-09T13:32:18Z)
- Generalized Differentiable RANSAC [95.95627475224231]
$\nabla$-RANSAC is a differentiable RANSAC that allows learning the entire randomized robust estimation pipeline.
$\nabla$-RANSAC is superior to the state-of-the-art in terms of accuracy while running at a similar speed to its less accurate alternatives.
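A common ingredient in differentiable RANSAC variants is replacing the hard inlier test with a smooth one so the consensus score has usable gradients; a generic sketch (this is not $\nabla$-RANSAC itself):
```python
import numpy as np

def soft_inlier_score(residuals, tau=1.0, beta=5.0):
    """Smooth relaxation of RANSAC's inlier count: a sigmoid of the residual
    magnitude around the threshold tau, so the score is differentiable with
    respect to the model that produced the residuals."""
    return np.sum(1.0 / (1.0 + np.exp(beta * (np.abs(residuals) - tau))))
```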
arXiv Detail & Related papers (2022-12-26T15:13:13Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
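In its basic (non-Kronecker-factored) form, the Laplace approximation to the log marginal likelihood at a MAP estimate looks like the following sketch; the paper's contribution is making the curvature term tractable and differentiable for deep networks:
```python
import numpy as np

def laplace_log_evidence(loss_at_mode, hessian):
    """Laplace approximation to the log marginal likelihood:
    log p(D) ~ -loss(theta*) + (d/2) log(2 pi) - (1/2) log det H,
    where loss is the negative log joint and H its Hessian at the mode."""
    d = hessian.shape[0]
    _, logdet = np.linalg.slogdet(hessian)
    return -loss_at_mode + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```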
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
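The core step in natural gradient boosting is preconditioning the per-example gradient of the NLL by the inverse Fisher information; for a univariate Gaussian parameterized by (mu, log sigma) this has a simple closed form (sketch; the paper's multivariate case generalizes it):
```python
import numpy as np

def normal_natural_gradient(y, mu, log_sigma):
    """Natural gradient of the Gaussian NLL w.r.t. (mu, log sigma): the
    ordinary gradient preconditioned by the inverse Fisher information,
    which is diag(1/sigma^2, 2) for this parameterization."""
    sigma2 = np.exp(2.0 * log_sigma)
    grad_mu = -(y - mu) / sigma2             # ordinary gradient w.r.t. mu
    grad_ls = 1.0 - (y - mu) ** 2 / sigma2   # ordinary gradient w.r.t. log sigma
    return sigma2 * grad_mu, grad_ls / 2.0   # precondition by inverse Fisher
```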
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Learning Representations using Spectral-Biased Random Walks on Graphs [18.369974607582584]
We study how much a probabilistic bias in the random walk affects the quality of the nodes it picks.
We succinctly capture this neighborhood as a probability measure based on the spectrum of the node's neighborhood subgraph, represented as a normalized Laplacian matrix.
We empirically evaluate our approach against several state-of-the-art node embedding techniques on a wide variety of real-world datasets.
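The spectral object here is the eigenvalue spectrum of the symmetric normalized Laplacian of a node's neighborhood subgraph; a minimal sketch of computing it from an adjacency matrix:
```python
import numpy as np

def normalized_laplacian_spectrum(adj):
    """Eigenvalues of L = I - D^{-1/2} A D^{-1/2} for a (symmetric) adjacency
    matrix 'adj'; the paper turns this spectrum into a probability measure
    that biases the random walk."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(adj.shape[0]) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)
```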
arXiv Detail & Related papers (2020-05-19T20:42:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.