Fast and Scalable Optimal Transport for Brain Tractograms
- URL: http://arxiv.org/abs/2107.02010v1
- Date: Mon, 5 Jul 2021 13:28:41 GMT
- Title: Fast and Scalable Optimal Transport for Brain Tractograms
- Authors: Jean Feydy and Pierre Roussillon and Alain Trouvé and Pietro Gori
- Abstract summary: We present a new multiscale algorithm for solving regularized Optimal Transport problems on the GPU with a linear memory footprint.
We show the effectiveness of this approach on brain tractograms modeled either as bundles of fibers or as track density maps.
- Score: 4.610968512889579
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new multiscale algorithm for solving regularized Optimal
Transport problems on the GPU, with a linear memory footprint. Relying on
Sinkhorn divergences which are convex, smooth and positive definite loss
functions, this method enables the computation of transport plans between
millions of points in a matter of minutes. We show the effectiveness of this
approach on brain tractograms modeled either as bundles of fibers or as track
density maps. We use the resulting smooth assignments to perform label transfer
for atlas-based segmentation of fiber tractograms. The parameters -- blur and
reach -- of our method are meaningful, defining the minimum and maximum
distance at which two fibers are compared with each other. They can be set
according to anatomical knowledge. Furthermore, we also propose to estimate a
probabilistic atlas of a population of track density maps as a Wasserstein
barycenter. Our CUDA implementation is endowed with a user-friendly PyTorch
interface, freely available on the PyPi repository (pip install geomloss) and
at www.kernel-operations.io/geomloss.
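As a quick illustration of the interface mentioned above, here is a minimal usage sketch of the GeomLoss SamplesLoss layer; the point clouds, sizes, and blur/reach values below are hypothetical placeholders rather than the paper's experimental settings.

```python
# Minimal sketch: Sinkhorn divergence between two point clouds with GeomLoss.
# Random data stand in for real fibers or track density maps.
import torch
from geomloss import SamplesLoss

x = torch.randn(5000, 3, requires_grad=True)  # source cloud
y = torch.randn(2000, 3)                      # target cloud

# blur ~ minimum distance at which two points are told apart,
# reach ~ maximum distance over which points are matched.
loss = SamplesLoss(loss="sinkhorn", p=2, blur=0.05, reach=0.5)

value = loss(x, y)   # scalar, differentiable Sinkhorn divergence
value.backward()     # gradient w.r.t. x yields a smooth displacement field
print(value.item(), x.grad.shape)
```

The abstract also mentions estimating a probabilistic atlas as a Wasserstein barycenter; one common recipe with this interface is plain gradient descent on the sum of Sinkhorn divergences, sketched below under the assumption that a free-support barycenter is acceptable (the paper's exact procedure may differ).

```python
# Hedged sketch: free-support barycenter of several point clouds.
import torch
from geomloss import SamplesLoss

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
population = [torch.randn(2000, 3) for _ in range(4)]  # hypothetical subjects

bary = torch.randn(2000, 3, requires_grad=True)        # barycenter support
opt = torch.optim.Adam([bary], lr=0.05)

for _ in range(200):
    opt.zero_grad()
    total = sum(sinkhorn(bary, sample) for sample in population)
    total.backward()
    opt.step()
```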
Related papers
- Efficient Neural Network Approaches for Conditional Optimal Transport with Applications in Bayesian Inference [1.740133468405535]
We present two neural network approaches that approximate the solutions of static and dynamic conditional optimal transport (COT) problems.
We demonstrate both algorithms, comparing them with competing state-of-the-art approaches, using benchmark datasets and simulation-based inverse problems.
arXiv Detail & Related papers (2023-10-25T20:20:09Z)
- Learning the Geodesic Embedding with Graph Neural Networks [22.49236293942187]
We present GeGnn, a learning-based method for computing the approximate geodesic distance between two arbitrary points on discrete polyhedra surfaces.
Our key idea is to train a graph neural network to embed an input mesh into a high-dimensional embedding space.
We verify the efficiency and effectiveness of our method on ShapeNet and demonstrate that our method is faster than existing methods by orders of magnitude.
arXiv Detail & Related papers (2023-09-11T16:54:34Z)
- Semi-supervised Learning of Pushforwards For Domain Translation & Adaptation [3.800498098285222]
Given two probability densities on related data spaces, we seek a map pushing one density to the other.
For maps to have utility in a broad application space, the map must be available to apply on out-of-sample data points.
We introduce a novel pushforward map learning algorithm that utilizes normalizing flows to parameterize the map.
arXiv Detail & Related papers (2023-04-18T00:35:32Z)
- Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z)
- Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization(SF), leverages the bounded genus of point cloud mesh graphs, while the second class, RFDiffusion(RFD), uses popular epsilon-nearest-neighbor graph representations for point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z)
- Geodesic Sinkhorn for Fast and Accurate Optimal Transport on Manifolds [53.110934987571355]
We propose Geodesic Sinkhorn, a method based on a heat kernel on a manifold graph.
We apply our method to the computation of barycenters of several distributions of high dimensional single cell data from patient samples undergoing chemotherapy.
arXiv Detail & Related papers (2022-11-02T00:51:35Z)
- GraphWalks: Efficient Shape Agnostic Geodesic Shortest Path Estimation [93.60478281489243]
We propose a learnable network to approximate geodesic paths on 3D surfaces.
The proposed method provides efficient approximations of shortest paths and geodesic distance estimates.
arXiv Detail & Related papers (2022-05-30T16:22:53Z)
- Near-optimal estimation of smooth transport maps with kernel sums-of-squares [81.02564078640275]
Under smoothness conditions, the squared Wasserstein distance between two distributions can be efficiently computed with appealing statistical error upper bounds.
The object of interest for applications such as generative modeling is the underlying optimal transport map.
We propose the first tractable algorithm for which the statistical $L^2$ error on the maps nearly matches the existing minimax lower bounds for smooth map estimation.
arXiv Detail & Related papers (2021-12-03T13:45:36Z)
- FLOT: Scene Flow on Point Clouds Guided by Optimal Transport [82.86743909483312]
We propose and study a method called FLOT that estimates scene flow on point clouds.
Inspired by recent works on graph matching, we build a method to find these correspondences by borrowing tools from optimal transport.
Our main finding is that FLOT can perform as well as the best existing methods on synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-22T00:15:30Z)
- Sliced Iterative Normalizing Flows [7.6146285961466]
We develop an iterative (greedy) deep learning (DL) algorithm which is able to transform an arbitrary probability distribution function (PDF) into the target PDF.
As special cases of this algorithm, we introduce two sliced iterative Normalizing Flow (SINF) models, which map from the data to the latent space (GIS) and vice versa.
arXiv Detail & Related papers (2020-07-01T18:00:04Z)
- A Study of Performance of Optimal Transport [16.847501106437534]
We show that network simplex and augmenting path based algorithms can consistently outperform numerical matrix-scaling based methods.
We present a new algorithm that improves upon the classical Kuhn-Munkres algorithm.
arXiv Detail & Related papers (2020-05-03T20:37:05Z)
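To make the exact-versus-entropic comparison in the entry above concrete, here is a small hedged sketch using the POT (Python Optimal Transport) library as a stand-in, not that study's benchmark code: ot.emd2 calls a network-simplex solver, while ot.sinkhorn is a matrix-scaling method.

```python
# Hedged sketch: exact (network-simplex) vs. entropic (matrix-scaling) OT,
# illustrated with the POT library on random toy data.
import numpy as np
import ot  # pip install pot

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))              # source samples
y = rng.normal(loc=1.0, size=(600, 2))     # target samples
a = np.full(500, 1 / 500)                  # uniform source weights
b = np.full(600, 1 / 600)                  # uniform target weights
M = ot.dist(x, y)                          # squared Euclidean cost matrix

exact_cost = ot.emd2(a, b, M)              # network simplex: exact OT cost
plan_sink = ot.sinkhorn(a, b, M, reg=0.1)  # Sinkhorn: entropic matrix scaling
entropic_cost = float((plan_sink * M).sum())
print(exact_cost, entropic_cost)
```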