Functional Space Analysis of Local GAN Convergence
- URL: http://arxiv.org/abs/2102.04448v1
- Date: Mon, 8 Feb 2021 18:59:46 GMT
- Title: Functional Space Analysis of Local GAN Convergence
- Authors: Valentin Khrulkov, Artem Babenko, Ivan Oseledets
- Abstract summary: We study the local dynamics of adversarial training in the general functional space.
We show how it can be represented as a system of partial differential equations.
Our perspective reveals several insights on the practical tricks commonly used to stabilize GANs.
- Score: 26.985600125290908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work demonstrated the benefits of studying continuous-time dynamics
governing the GAN training. However, this dynamics is analyzed in the model
parameter space, which results in finite-dimensional dynamical systems. We
propose a novel perspective where we study the local dynamics of adversarial
training in the general functional space and show how it can be represented as
a system of partial differential equations. Thus, the convergence properties
can be inferred from the eigenvalues of the resulting differential operator. We
show that these eigenvalues can be efficiently estimated from the target
dataset before training. Our perspective reveals several insights on the
practical tricks commonly used to stabilize GANs, such as gradient penalty,
data augmentation, and advanced integration schemes. As an immediate practical
benefit, we demonstrate how one can a priori select an optimal data
augmentation strategy for a particular generation task.
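As a toy illustration of the spectral viewpoint in the abstract, the sketch below linearizes a simple bilinear two-player game and reads local convergence off the eigenvalues of the Jacobian of the continuous-time training dynamics. This is a minimal finite-dimensional analogue, not the paper's functional-space operator; the `jacobian` helper and the damping parameter `gamma` are hypothetical stand-ins for how a gradient-penalty-style regularizer shifts the spectrum.

```python
# Minimal sketch (assumed toy analogue, not the paper's operator): linearize
# GAN-like two-player dynamics and read convergence from Jacobian eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))  # coupling of the bilinear game min_x max_y x^T A y

# Continuous-time simultaneous gradient dynamics, linearized at the equilibrium (0, 0):
#   d/dt [x; y] = J(gamma) [x; y]
# gamma > 0 adds damping on the "discriminator" block, a hypothetical stand-in
# for regularizers such as the gradient penalty.
def jacobian(gamma):
    return np.block([[np.zeros((n, n)), -A],
                     [A.T, -gamma * np.eye(n)]])

for gamma in (0.0, 0.5):
    max_re = np.linalg.eigvals(jacobian(gamma)).real.max()
    if max_re < -1e-9:
        verdict = "locally convergent"
    elif max_re > 1e-9:
        verdict = "locally divergent"
    else:
        verdict = "marginal (cycles)"
    print(f"gamma={gamma}: max Re(eigenvalue) = {max_re:+.4f} -> {verdict}")
```

Under this reading, the abstract's claim that the relevant eigenvalues can be estimated from the target dataset before training corresponds to building such an operator from data statistics rather than from a toy matrix.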
Related papers
- Learning dynamical systems from data: Gradient-based dictionary optimization [0.8643517734716606]
We present a novel gradient descent-based optimization framework for learning suitable basis functions from data.
We show how it can be used in combination with EDMD, SINDy, and PDE-FIND.
arXiv Detail & Related papers (2024-11-07T15:15:27Z)
- Exploring the Precise Dynamics of Single-Layer GAN Models: Leveraging Multi-Feature Discriminators for High-Dimensional Subspace Learning [0.0]
We study the training dynamics of a single-layer GAN model from the perspective of subspace learning.
By bridging our analysis to the realm of subspace learning, we systematically compare the efficacy of GAN-based methods against conventional approaches.
arXiv Detail & Related papers (2024-11-01T10:21:12Z)
- Sparse identification of quasipotentials via a combined data-driven method [4.599618895656792]
We leverage machine learning by combining two data-driven techniques, namely a neural network and a sparse regression algorithm, to obtain symbolic expressions of quasipotential functions.
We show that our approach discovers a parsimonious quasipotential equation for an archetypal model with a known exact quasipotential and for the dynamics of a nanomechanical resonator.
arXiv Detail & Related papers (2024-07-06T11:27:52Z)
- Learning invariant representations of time-homogeneous stochastic dynamical systems [27.127773672738535]
We study the problem of learning a representation of the state that faithfully captures its dynamics.
This is instrumental in learning the transfer operator or the generator of the system.
We show that the search for a good representation can be cast as an optimization problem over neural networks.
arXiv Detail & Related papers (2023-07-19T11:32:24Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be applied in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Anamnesic Neural Differential Equations with Orthogonal Polynomial Projections [6.345523830122166]
We propose PolyODE, a formulation that enforces long-range memory and preserves a global representation of the underlying dynamical system.
Our construction is backed by favourable theoretical guarantees and we demonstrate that it outperforms previous works in the reconstruction of past and future data.
arXiv Detail & Related papers (2023-03-03T10:49:09Z)
- Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves the learning accuracy at the time of supervision and is able to interpolate the solutions to any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
arXiv Detail & Related papers (2022-01-25T17:13:56Z)
- A New Representation of Successor Features for Transfer across Dissimilar Environments [60.813074750879615]
Many real-world RL problems require transfer among environments with different dynamics.
We propose an approach based on successor features in which we model successor feature functions with Gaussian Processes.
Our theoretical analysis proves the convergence of this approach as well as the bounded error on modelling successor feature functions.
arXiv Detail & Related papers (2021-07-18T12:37:05Z)
- On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic integrators to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.