Geometry-Aware Hierarchical Bayesian Learning on Manifolds
- URL: http://arxiv.org/abs/2111.00184v1
- Date: Sat, 30 Oct 2021 05:47:05 GMT
- Title: Geometry-Aware Hierarchical Bayesian Learning on Manifolds
- Authors: Yonghui Fan, Yalin Wang
- Abstract summary: We propose a hierarchical Bayesian learning model for learning on manifold-valued vision data.
We first introduce a kernel with the properties of geometry-awareness and intra-kernel convolution.
We then use Gaussian process regression to organize the inputs and finally implement a hierarchical Bayesian network for the feature aggregation.
- Score: 5.182379239800725
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Bayesian learning with Gaussian processes demonstrates encouraging regression
and classification performance in solving computer vision tasks. However,
Bayesian methods on 3D manifold-valued vision data, such as meshes and point
clouds, are seldom studied. One of the primary challenges is how to effectively
and efficiently aggregate geometric features from the irregular inputs. In this
paper, we propose a hierarchical Bayesian learning model to address this
challenge. We initially introduce a kernel with the properties of
geometry-awareness and intra-kernel convolution. This enables geometrically
reasonable inferences on manifolds without using any specific hand-crafted
feature descriptors. Then, we use a Gaussian process regression to organize the
inputs and finally implement a hierarchical Bayesian network for the feature
aggregation. Furthermore, we combine the feature learning of neural networks
with the feature aggregation of Bayesian models to investigate the feasibility
of jointly learning on manifolds. Experimental results not only
show that our method outperforms existing Bayesian methods on manifolds but
also demonstrate the prospect of coupling neural networks with Bayesian
networks.
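The pipeline described in the abstract (a geometry-aware kernel, Gaussian process regression over the irregular inputs, then hierarchical Bayesian aggregation) can only be illustrated loosely without the paper's kernel definition. The snippet below is a minimal sketch under explicit assumptions: it substitutes a geodesic-distance RBF kernel built from a k-nearest-neighbour graph for the paper's geometry-aware kernel, and a plain GP posterior mean for the hierarchical Bayesian network; it is not the authors' method.

```python
# Minimal, illustrative sketch only (NOT the paper's kernel or model):
# a geodesic-distance RBF kernel on a point cloud, used in plain GP regression.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def geodesic_rbf_kernel(points, k=8, lengthscale=0.5):
    """Approximate geodesic distances via a k-NN graph, then apply an RBF form.
    Note: this construction is not guaranteed positive definite; the noise
    term added below acts as jitter in this toy setting."""
    graph = kneighbors_graph(points, n_neighbors=k, mode="distance")
    geo = shortest_path(graph, method="D", directed=False)
    return np.exp(-0.5 * (geo / lengthscale) ** 2)

def gp_posterior_mean(K, y, noise=1e-2):
    """GP regression posterior mean at the training inputs."""
    alpha = np.linalg.solve(K + noise * np.eye(K.shape[0]), y)
    return K @ alpha

# Toy usage: denoise a scalar signal living on a point cloud sampled from a sphere.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # points on the unit sphere (a 2-manifold)
y = X[:, 2] + 0.1 * rng.normal(size=200)        # signal = height + observation noise
K = geodesic_rbf_kernel(X)
y_denoised = gp_posterior_mean(K, y)
```

The sketch only conveys why a geometry-aware distance (rather than the ambient Euclidean distance) matters for inference on manifold-valued data; the paper's kernel additionally provides intra-kernel convolution, and its aggregation is hierarchical rather than a single GP regression.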
Related papers
- Exploring the Manifold of Neural Networks Using Diffusion Geometry [7.038126249994092]
We learn a manifold whose data points are neural networks, by introducing a distance between the hidden-layer representations of the networks.
These distances are then fed to the non-linear dimensionality reduction algorithm PHATE to create a manifold of neural networks (a minimal distance-computation sketch appears after this list).
Our analysis reveals that high-performing networks cluster together in the manifold, displaying consistent embedding patterns.
arXiv Detail & Related papers (2024-11-19T16:34:45Z)
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh vertex, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Convergence Analysis for Deep Sparse Coding via Convolutional Neural Networks [7.956678963695681]
We introduce a novel class of Deep Sparse Coding (DSC) models.
We derive convergence rates for CNNs in their ability to extract sparse features.
Inspired by the strong connection between sparse coding and CNNs, we explore training strategies to encourage neural networks to learn more sparse features.
arXiv Detail & Related papers (2024-08-10T12:43:55Z)
- Automatic Discovery of Visual Circuits [66.99553804855931]
We explore scalable methods for extracting the subgraph of a vision model's computational graph that underlies recognition of a specific visual concept.
We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
arXiv Detail & Related papers (2024-04-22T17:00:57Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Bayesian graph convolutional neural networks via tempered MCMC [0.41998444721319217]
Deep learning models, such as convolutional neural networks, have long been applied to image and multi-media tasks.
More recently, there has been more attention to unstructured data that can be represented via graphs.
These types of data are often found in health and medicine, social networks, and research data repositories.
arXiv Detail & Related papers (2021-04-17T04:03:25Z)
- PointShuffleNet: Learning Non-Euclidean Features with Homotopy Equivalence and Mutual Information [9.920649045126188]
We propose a novel point cloud analysis neural network called PointShuffleNet (PSN), which shows great promise in point cloud classification and segmentation.
Our PSN achieves state-of-the-art results on ModelNet40, ShapeNet and S3DIS with high efficiency.
arXiv Detail & Related papers (2021-03-31T03:01:16Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces structural properties.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We apply a primal-dual framework drawn from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
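For the "Exploring the Manifold of Neural Networks Using Diffusion Geometry" entry above, the distance-computation step it describes can be sketched as follows. Everything here is an illustrative assumption (toy linear "networks", a shared random probe batch, plain Euclidean distance on flattened activations); the cited paper defines its own distance and then embeds the resulting matrix with PHATE.

```python
# Illustrative sketch only: a simple distance matrix between the hidden-layer
# representations of several "networks" evaluated on a shared probe batch.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
probe = rng.normal(size=(64, 16))                    # shared probe batch

# Toy stand-ins for trained networks: random one-layer hidden maps.
networks = [rng.normal(size=(16, 32)) for _ in range(10)]

def hidden_representation(weights, batch):
    """Hidden-layer activations of a toy one-layer 'network' on the probe batch."""
    return np.tanh(batch @ weights)

reps = np.stack([hidden_representation(w, probe).reshape(-1) for w in networks])
dist_matrix = squareform(pdist(reps, metric="euclidean"))   # n_networks x n_networks

# dist_matrix would then be handed to a non-linear embedding method such as
# PHATE to visualise how the networks cluster on the resulting manifold.
```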