SrvfNet: A Generative Network for Unsupervised Multiple Diffeomorphic
Shape Alignment
- URL: http://arxiv.org/abs/2104.13449v1
- Date: Tue, 27 Apr 2021 19:49:46 GMT
- Authors: Elvis Nunez, Andrew Lizarraga, and Shantanu H. Joshi
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present SrvfNet, a generative deep learning framework for the joint
multiple alignment of large collections of functional data comprising
square-root velocity functions (SRVF) to their templates. Our proposed
framework is fully unsupervised and is capable of aligning to a predefined
template as well as jointly predicting an optimal template from data while
simultaneously achieving alignment. Our network is constructed as a generative
encoder-decoder architecture comprising fully-connected layers capable of
producing a distribution space of the warping functions. We demonstrate the
strength of our framework by validating it on synthetic data as well as
diffusion profiles from magnetic resonance imaging (MRI) data.
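As background for the abstract above, here is a minimal numerical sketch of the SRVF transform and the warping-group action under which alignment is performed. These are the standard definitions from elastic functional data analysis; the function names `srvf` and `warp` are illustrative and not part of SrvfNet itself:

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function of f sampled at times t:
    q(t) = f'(t) / sqrt(|f'(t)|)."""
    df = np.gradient(f, t)
    eps = 1e-12  # guard against division by zero on flat segments
    return df / np.sqrt(np.abs(df) + eps)

def warp(q, gamma, t):
    """Action of a warping function gamma on an SRVF q:
    (q, gamma) -> q(gamma(t)) * sqrt(gamma'(t)),
    which preserves the L2 norm of q."""
    dgamma = np.gradient(gamma, t)
    return np.interp(gamma, t, q) * np.sqrt(dgamma)

t = np.linspace(0.0, 1.0, 200)
f = np.sin(2.0 * np.pi * t)
q = srvf(f, t)

# The identity warp gamma(t) = t leaves the SRVF unchanged.
assert np.allclose(warp(q, t, t), q)
```

Aligning a function to a template then amounts to finding the warping function that minimizes the L2 distance between the warped SRVF and the template's SRVF; SrvfNet replaces that per-pair optimization with a feed-forward prediction of the warping functions.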
Related papers
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- FissionVAE: Federated Non-IID Image Generation with Latent Space and Decoder Decomposition [9.059664504170287]
Federated learning enables decentralized clients to collaboratively learn a shared model while keeping all the training data local.
We introduce a novel approach, FissionVAE, which decomposes the latent space and constructs decoder branches tailored to individual client groups.
To evaluate our approach, we assemble two composite datasets: the first combines MNIST and FashionMNIST; the second comprises RGB datasets of cartoon and human faces, wild animals, marine vessels, and remote sensing images of Earth.
arXiv Detail & Related papers (2024-08-30T08:22:30Z)
- A Framework for Fine-Tuning LLMs using Heterogeneous Feedback [69.51729152929413]
We present a framework for fine-tuning large language models (LLMs) using heterogeneous feedback.
First, we combine the heterogeneous feedback data into a single supervision format, compatible with methods like SFT and RLHF.
Next, given this unified feedback dataset, we extract a high-quality and diverse subset to obtain performance increases.
arXiv Detail & Related papers (2024-08-05T23:20:32Z)
- ReFiNe: Recursive Field Networks for Cross-modal Multi-scene Representation [37.24514001359966]
We show how to encode multiple shapes represented as continuous neural fields with a higher degree of precision than previously possible.
We demonstrate state-of-the-art multi-scene reconstruction and compression results with a single network per dataset.
arXiv Detail & Related papers (2024-06-06T17:55:34Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm to focus on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- Sparsity-guided Network Design for Frame Interpolation [39.828644638174225]
We present a compression-driven network design for frame-based algorithms.
We leverage model pruning through sparsity-inducing optimization to greatly reduce the model size.
We achieve a considerable performance gain with a quarter of the size of the original AdaCoF.
arXiv Detail & Related papers (2022-09-09T23:13:25Z)
- Bayesian Structure Learning with Generative Flow Networks [85.84396514570373]
In Bayesian structure learning, we are interested in inferring a distribution over the directed acyclic graph (DAG) from data.
Recently, a class of probabilistic models, called Generative Flow Networks (GFlowNets), have been introduced as a general framework for generative modeling.
We show that our approach, called DAG-GFlowNet, provides an accurate approximation of the posterior over DAGs.
arXiv Detail & Related papers (2022-02-28T15:53:10Z)
- Latent Code-Based Fusion: A Volterra Neural Network Approach [21.25021807184103]
We propose a deep structure encoder using the recently introduced Volterra Neural Networks (VNNs).
We show that the proposed approach demonstrates much-improved sample complexity over CNN-based auto-encoders, with superb robust classification performance.
arXiv Detail & Related papers (2021-04-10T18:29:01Z)
- Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z)
- Ensemble Model with Batch Spectral Regularization and Data Blending for Cross-Domain Few-Shot Learning with Unlabeled Data [75.94147344921355]
We build a multi-branch ensemble framework by using diverse feature transformation matrices.
We propose a data blending method to exploit the unlabeled data and augment the sparse support set in the target domain.
arXiv Detail & Related papers (2020-06-08T02:27:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.