Dynamic Flows on Curved Space Generated by Labeled Data
- URL: http://arxiv.org/abs/2302.00061v1
- Date: Tue, 31 Jan 2023 19:53:01 GMT
- Title: Dynamic Flows on Curved Space Generated by Labeled Data
- Authors: Xinru Hua, Truyen Nguyen, Tam Le, Jose Blanchet, Viet Anh Nguyen
- Abstract summary: We propose a gradient flow method to generate new samples close to a dataset of interest.
We show that our method can improve the accuracy of classification models in transfer learning settings.
- Score: 17.621847430986854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The scarcity of labeled data is a long-standing challenge for many machine
learning tasks. We propose a gradient flow method that leverages an existing
dataset (i.e., source) to generate new samples that are close to the dataset of
interest (i.e., target). We lift both datasets to the space of probability
distributions on the feature-Gaussian manifold, and then develop a gradient
flow method that minimizes the maximum mean discrepancy loss. To perform the
gradient flow of distributions on the curved feature-Gaussian space, we unravel
the Riemannian structure of the space and compute explicitly the Riemannian
gradient of the loss function induced by the optimal transport metric. For
practical applications, we also propose a discretized flow, and provide
conditional results guaranteeing the global convergence of the flow to the
optimum. We illustrate the results of our proposed gradient flow method on
several real-world datasets and show our method can improve the accuracy of
classification models in transfer learning settings.
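As a rough illustration of the discretized flow, the sketch below runs gradient descent on a set of movable source particles to minimize the squared maximum mean discrepancy against fixed target samples, using a Gaussian kernel in plain Euclidean feature space. This is a minimal sketch, not the authors' implementation: the kernel bandwidth, step size, and iteration count are illustrative assumptions, and the Riemannian feature-Gaussian geometry and optimal transport metric of the paper are not modeled here.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2)) for all pairs of rows."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def mmd_gradient(X, Y, bandwidth=1.0):
    """Gradient of the (biased) squared MMD between X (movable) and Y (fixed) w.r.t. X."""
    n, m = len(X), len(Y)
    h2 = bandwidth**2
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    grad = np.zeros_like(X)
    for i in range(n):
        # Descending this gradient spreads the particles apart (first term)
        # and pulls them toward the target samples (second term).
        grad[i] = (-2.0 / (n * n * h2)) * (Kxx[i][:, None] * (X[i] - X)).sum(0) \
                  + (2.0 / (n * m * h2)) * (Kxy[i][:, None] * (X[i] - Y)).sum(0)
    return grad

def mmd_flow(source, target, steps=300, step_size=0.5, bandwidth=1.0):
    """Explicit-Euler discretization of the MMD gradient flow (Euclidean sketch)."""
    X = source.copy()
    for _ in range(steps):
        X -= step_size * mmd_gradient(X, target, bandwidth)
    return X

# Toy usage: move source points drawn near the origin toward a shifted target cloud.
rng = np.random.default_rng(0)
source = rng.normal(size=(100, 2))
target = rng.normal(loc=3.0, size=(100, 2))
adapted = mmd_flow(source, target, bandwidth=2.0)
```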
Related papers
- A Historical Trajectory Assisted Optimization Method for Zeroth-Order Federated Learning [24.111048817721592]
Federated learning heavily relies on distributed gradient descent techniques.
In the situation where gradient information is not available, gradients need to be estimated from zeroth-order information.
We propose a non-isotropic sampling method to improve the gradient estimation procedure (a generic two-point estimator sketch follows this entry).
arXiv Detail & Related papers (2024-09-24T10:36:40Z)
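For orientation on the zeroth-order setting in the entry above, here is a minimal two-point Gaussian-smoothing gradient estimator. It uses isotropic directions, so it does not reproduce that paper's non-isotropic sampling or its historical-trajectory assistance; the smoothing radius and sample count are illustrative assumptions.

```python
import numpy as np

def zeroth_order_gradient(f, x, num_samples=64, mu=1e-2, rng=None):
    """Estimate grad f(x) from function evaluations only (no analytic gradient).

    Averages the two-point estimator  (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u
    over isotropic Gaussian directions u.
    """
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(x, dtype=float)
    for _ in range(num_samples):
        u = rng.normal(size=x.shape)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return grad / num_samples

# Toy usage: for f(x) = ||x||^2 the estimate is close to the true gradient 2x.
x0 = np.array([1.0, -2.0, 0.5])
g_hat = zeroth_order_gradient(lambda z: float(z @ z), x0, num_samples=5000)
```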
- Flow-based Distributionally Robust Optimization [23.232731771848883]
We present a framework, called FlowDRO, for solving flow-based distributionally robust optimization (DRO) problems with Wasserstein uncertainty sets.
We aim to find a continuous worst-case distribution (also called the least favorable distribution, LFD) and sample from it (a generic worst-case-perturbation sketch follows this entry).
We demonstrate its usage in adversarial learning, distributionally robust hypothesis testing, and a new mechanism for data-driven distribution perturbation differential privacy.
arXiv Detail & Related papers (2023-10-30T03:53:31Z)
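The FlowDRO entry above searches for a worst-case (least favorable) distribution inside a Wasserstein uncertainty set. As a rough, generic illustration, and not that paper's flow-based model, the sketch below runs gradient ascent on samples to maximize a loss minus a quadratic transport penalty, a common Lagrangian relaxation of the Wasserstein-DRO inner problem; the loss, penalty weight lambda_reg, step size, and iteration count are assumptions.

```python
import numpy as np

def worst_case_samples(loss_grad, X0, lambda_reg=1.0, step_size=0.05, steps=200):
    """Gradient ascent on  loss(x) - lambda_reg * ||x - x0||^2 / 2  for each sample.

    X0 holds the original samples; the returned points approximate draws from a
    worst-case perturbation of the empirical distribution under a transport penalty.
    """
    X = X0.copy()
    for _ in range(steps):
        X += step_size * (loss_grad(X) - lambda_reg * (X - X0))
    return X

# Toy usage: a linear "loss" with gradient w pushes every sample along w,
# while the transport penalty anchors each point to where it started.
rng = np.random.default_rng(1)
X0 = rng.normal(size=(200, 2))
w = np.array([1.0, 0.5])
X_worst = worst_case_samples(lambda X: np.tile(w, (len(X), 1)), X0)
```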
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution (a toy projection sketch follows this entry).
Finally, we focus on 3D point clouds for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
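Relating to the ManiFlow objective above, the sketch below recovers a high-likelihood point near a perturbed sample by gradient ascent on log p(x) - lam * ||x - x_perturbed||^2 / 2. A toy analytic Gaussian-mixture score stands in for the log-likelihood gradient of a trained normalizing flow; the mixture, lam, step size, and iteration count are assumptions.

```python
import numpy as np

def mixture_score(x, means, sigma=0.3):
    """Gradient of log p(x) for an equal-weight isotropic Gaussian mixture
    (a toy stand-in for the log-likelihood gradient of a trained flow)."""
    diffs = x[None, :] - means                              # (K, d)
    logits = -np.sum(diffs**2, axis=1) / (2.0 * sigma**2)
    resp = np.exp(logits - logits.max())
    resp /= resp.sum()                                      # responsibilities r_k(x)
    return -(resp[:, None] * diffs).sum(axis=0) / sigma**2

def project_to_manifold(x_perturbed, means, lam=1.0, step_size=0.02, steps=500):
    """Gradient ascent on  log p(x) - lam * ||x - x_perturbed||^2 / 2."""
    x = x_perturbed.astype(float).copy()
    for _ in range(steps):
        x += step_size * (mixture_score(x, means) - lam * (x - x_perturbed))
    return x

# Toy usage: pull a perturbed point back toward the nearest high-density region.
means = np.array([[0.0, 0.0], [4.0, 0.0]])
x_tilde = np.array([0.8, 0.7])
x_hat = project_to_manifold(x_tilde, means)
```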
- VQ-Flows: Vector Quantized Local Normalizing Flows [2.7998963147546148]
We introduce a novel statistical framework for learning a mixture of local normalizing flows as "chart maps" over a data manifold.
Our framework augments the expressivity of recent approaches while preserving the signature property of normalizing flows, that they admit exact density evaluation.
arXiv Detail & Related papers (2022-03-22T09:22:18Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Semi-Supervised Learning with Normalizing Flows [54.376602201489995]
FlowGMM is an end-to-end approach to generative semi-supervised learning with normalizing flows.
We show promising results on a wide range of applications, including AG-News and Yahoo Answers text data.
arXiv Detail & Related papers (2019-12-30T17:36:33Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation (a standard first-order Langevin sketch follows this entry).
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
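For orientation on the last entry: the Wasserstein gradient flow of the relative entropy KL(rho || p), with p(x) proportional to exp(-E(x)), is realized at the particle level by Langevin dynamics. The sketch below is that standard first-order discretization (unadjusted Langevin), not the paper's near-optimal second-order scheme; the quadratic energy, step size, and particle count are assumptions.

```python
import numpy as np

def langevin_flow(energy_grad, particles, step_size=1e-2, steps=2000, rng=None):
    """Unadjusted Langevin discretization of the Wasserstein gradient flow of
    KL(rho || p), where the target density p(x) is proportional to exp(-E(x))."""
    rng = np.random.default_rng() if rng is None else rng
    X = particles.copy()
    for _ in range(steps):
        X += -step_size * energy_grad(X) + np.sqrt(2.0 * step_size) * rng.normal(size=X.shape)
    return X

# Toy usage: particles initialized far from the mode relax toward the standard
# Gaussian target with energy E(x) = ||x||^2 / 2, so energy_grad(X) = X.
rng = np.random.default_rng(2)
X0 = rng.normal(loc=5.0, size=(500, 2))
samples = langevin_flow(lambda X: X, X0, rng=rng)
```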
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.