A Probabilistic Rotation Representation for Symmetric Shapes With an
Efficiently Computable Bingham Loss Function
- URL: http://arxiv.org/abs/2305.18947v1
- Date: Tue, 30 May 2023 11:26:18 GMT
- Title: A Probabilistic Rotation Representation for Symmetric Shapes With an
Efficiently Computable Bingham Loss Function
- Authors: Hiroya Sato, Takuya Ikeda, Koichi Nishiwaki
- Abstract summary: We introduce a fast-computable and easy-to-implement NLL loss function for the Bingham distribution.
Our loss function can capture the symmetry of target objects from their point clouds.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, deep learning frameworks have been widely used for
object pose estimation. While the quaternion is a common choice of rotation
representation, it cannot express the ambiguity of an observation. To handle
this ambiguity, the Bingham distribution is one promising solution. However,
evaluating its negative log-likelihood (NLL) loss requires a complicated
calculation. An alternative, easy-to-implement loss function has been proposed
to avoid this complexity, but it has difficulty expressing symmetric
distributions. In this paper, we introduce a fast-computable and
easy-to-implement NLL loss function for the Bingham distribution. We also build
an inference network and show that our loss function can capture the symmetry
of target objects from their point clouds.
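For concreteness, here is a minimal NumPy sketch of the Bingham NLL on unit
quaternions, where p(x) is proportional to exp(x^T M Z M^T x). The normalizing
constant F(Z) is a hypergeometric function of matrix argument; the paper's
contribution is a fast-computable approximation of exactly this quantity, which
the sketch below does not reproduce. It instead uses naive Monte Carlo over the
unit 3-sphere, and all names and parameter values are illustrative.

    import numpy as np

    def bingham_nll(x, M, Z, n_samples=200000, rng=None):
        # Bingham density on unit quaternions: p(x) = exp(x^T M Z M^T x) / F(Z),
        # with M a 4x4 orthogonal matrix and Z = diag(z1, z2, z3, 0), z_i <= 0.
        # F(Z) is estimated by plain Monte Carlo over S^3 purely for
        # illustration; this is NOT the paper's fast approximation.
        rng = np.random.default_rng() if rng is None else rng
        A = M @ Z @ M.T
        u = rng.normal(size=(n_samples, 4))
        u /= np.linalg.norm(u, axis=1, keepdims=True)  # uniform samples on S^3
        area_s3 = 2.0 * np.pi ** 2                     # surface area of S^3
        F = area_s3 * np.exp(np.einsum("ni,ij,nj->n", u, A, u)).mean()
        return np.log(F) - x @ A @ x                   # NLL = log F(Z) - x^T A x

    # Example: a distribution concentrated around the identity quaternion.
    M = np.eye(4)
    Z = np.diag([-20.0, -20.0, -2.0, 0.0])  # the zero entry marks the mode axis
    x = np.array([0.0, 0.0, 0.0, 1.0])      # identity quaternion, (x, y, z, w) order
    print(bingham_nll(x, M, Z))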
Related papers
- Revisiting Rotation Averaging: Uncertainties and Robust Losses [51.64986160468128]
We argue that the main problem of current methods is the minimized cost function, which is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging problem.
arXiv Detail & Related papers (2023-03-09T11:51:20Z)
- Implicit Neural Representation for Mesh-Free Inverse Obstacle Scattering [21.459567997723376]
Implicit representation of shapes as level sets of multilayer perceptrons has recently flourished in different shape analysis, compression, and reconstruction tasks.
We introduce an implicit neural representation-based framework for solving the inverse obstacle scattering problem in a mesh-free fashion.
arXiv Detail & Related papers (2022-06-04T17:16:09Z)
- Probabilistic Rotation Representation With an Efficiently Computable Bingham Loss Function and Its Application to Pose Estimation [0.0]
We propose a fast-computable and easy-to-implement loss function for the Bingham distribution.
We not only examine the parametrization of the Bingham distribution but also present an application based on our loss function.
arXiv Detail & Related papers (2022-03-09T00:38:28Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second-moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- Learning with Noisy Labels via Sparse Regularization [76.31104997491695]
Learning with noisy labels is an important task for training accurate deep neural networks.
Some commonly-used loss functions, such as Cross Entropy (CE), suffer from severe overfitting to noisy labels.
We introduce a sparse regularization strategy to approximate the one-hot constraint on the network output (a sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-07-31T09:40:23Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- All your loss are belong to Bayes [28.393499629583786]
Loss functions are a cornerstone of machine learning and the starting point of most algorithms.
We introduce a trick on squared Gaussian Processes to obtain a random process whose paths are compliant source functions.
Experimental results demonstrate substantial improvements over the state of the art.
arXiv Detail & Related papers (2020-06-08T14:31:21Z)
- AMC-Loss: Angular Margin Contrastive Loss for Improved Explainability in Image Classification [8.756814963313804]
Angular Margin Contrastive Loss (AMC-Loss) is a new loss function to be used alongside the traditional cross-entropy loss.
AMC-Loss employs a discriminative angular distance metric that is equivalent to geodesic distance on a hypersphere manifold (see the sketch after this list).
We find that although the proposed geometrically constrained loss function improves quantitative results only modestly, it has a surprisingly beneficial qualitative effect, increasing the interpretability of deep-net decisions.
arXiv Detail & Related papers (2020-04-21T08:03:14Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
- Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves a linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z)
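On the sparse-regularization entry above: one way to approximate a one-hot constraint is to penalize the l_p norm (0 < p < 1) of the softmax output, which on the probability simplex is minimized exactly at one-hot vectors. The PyTorch sketch below shows that idea; it is not necessarily the paper's exact formulation, and the lam and p values are illustrative.

    import torch
    import torch.nn.functional as F

    def ce_with_sparse_reg(logits, targets, lam=1.0, p=0.5):
        # For 0 < p < 1 and probabilities q_i, sum_i q_i^p >= 1, with equality
        # exactly at one-hot vectors, so this penalty sharpens predictions.
        probs = torch.softmax(logits, dim=1)
        penalty = probs.clamp_min(1e-12).pow(p).sum(dim=1).mean()
        return F.cross_entropy(logits, targets) + lam * penalty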
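On the AMC-Loss entry: after l2-normalizing features onto the unit hypersphere, the geodesic distance between two embeddings is the arccos of their inner product, and a contrastive penalty pulls same-class pairs together while pushing different-class pairs past an angular margin. The sketch below is a minimal rendering of that idea under assumed details (margin value, uniform pair weighting), meant to be added to the usual cross-entropy loss; it is not the paper's exact configuration.

    import torch

    def amc_penalty(feats, labels, margin=0.5):
        z = torch.nn.functional.normalize(feats, dim=1)    # map onto the hypersphere
        cos = (z @ z.t()).clamp(-1 + 1e-7, 1 - 1e-7)
        geo = torch.acos(cos)                              # pairwise geodesic distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=feats.device)
        pull = geo[same & off_diag].pow(2).sum()           # shrink same-class angles
        push = (margin - geo[~same]).clamp(min=0).pow(2).sum()  # enforce the margin
        return (pull + push) / off_diag.sum().clamp(min=1)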
This list is automatically generated from the titles and abstracts of the papers on this site.