Gradient-free Decoder Inversion in Latent Diffusion Models
- URL: http://arxiv.org/abs/2409.18442v1
- Date: Fri, 27 Sep 2024 04:38:14 GMT
- Title: Gradient-free Decoder Inversion in Latent Diffusion Models
- Authors: Seongmin Hong, Suh Yoon Jeon, Kyeonghyun Lee, Ernest K. Ryu, Se Young Chun
- Abstract summary: In latent diffusion models (LDMs), the denoising diffusion process takes place efficiently in a latent space whose dimension is lower than that of the pixel space.
We propose an efficient gradient-free decoder inversion method for LDMs that can be applied to diverse latent models.
- Score: 18.493960162113712
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In latent diffusion models (LDMs), the denoising diffusion process takes place efficiently in a latent space whose dimension is lower than that of the pixel space. A decoder is typically used to transform a representation in the latent space into one in the pixel space. While a decoder is assumed to have an encoder as an accurate inverse, an exact encoder-decoder pair rarely exists in practice, even though applications often require a precise inversion of the decoder. Prior works on decoder inversion in LDMs employed gradient descent, inspired by inversion methods for generative adversarial networks. However, gradient-based methods require larger GPU memory and longer computation time for larger latent spaces. For example, recent video LDMs can generate more than 16 frames, but a GPU with 24 GB of memory can only perform gradient-based decoder inversion for 4 frames. Here, we propose an efficient gradient-free decoder inversion method for LDMs that can be applied to diverse latent models. The theoretical convergence of the proposed inversion is investigated not only for the forward step method but also for inertial Krasnoselskii-Mann (KM) iterations, under a mild cocoercivity assumption that is satisfied by recent LDMs. Our gradient-free method with the Adam optimizer and learning-rate scheduling significantly reduced computation time and memory usage compared with prior gradient-based methods, and enabled efficient computation in applications such as noise-space watermarking while achieving comparable error levels.
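To make the memory argument concrete, a minimal sketch of a gradient-free inversion loop in the spirit of the abstract is given below. It uses only forward evaluations of the decoder (and an approximate encoder), so no activations are stored for backpropagation; the warm start, residual operator, inertia coefficient, and step size here are illustrative assumptions rather than the paper's exact algorithm.

```python
import torch

@torch.no_grad()
def gradient_free_invert(decoder, encoder, x, num_iters=100, step=0.1, beta=0.5):
    """Hypothetical inertial KM-style fixed-point iteration (no backprop).

    decoder/encoder: forward-only callables between latent and pixel space.
    x: target image whose latent we want to recover.
    """
    z = encoder(x)                  # warm start from the approximate inverse
    z_prev, target = z.clone(), z.clone()
    for _ in range(num_iters):
        y = z + beta * (z - z_prev)              # inertial (momentum) term
        residual = encoder(decoder(y)) - target  # forward passes only
        z_prev = z
        z = y - step * residual                  # forward-step update
    return z
```

Because only forward passes are needed, peak memory does not grow with the backward graph, which is the advantage the abstract highlights for multi-frame video LDMs.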
Related papers
- Explicit and data-Efficient Encoding via Gradient Flow [13.424502866278822]
We introduce a decoder-only method using gradient flow to directly encode data into the latent space.
We train the decoder via the adjoint method and show that costly integrals can be avoided with minimal accuracy loss.
This work paves the way for integrating machine learning into scientific applications, where precise and efficient encoding is critical.
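As a rough illustration of decoder-only encoding by gradient flow (not the paper's adjoint-based training; the latent dimension and step size below are assumptions), one can integrate dz/dt = -∇_z ||D(z) - x||^2 with forward Euler steps:

```python
import torch

def gradient_flow_encode(decoder, x, steps=200, dt=0.05, latent_dim=16):
    # Encode x by following the gradient flow of the reconstruction loss
    # in latent space; only a decoder is required, no trained encoder.
    z = torch.zeros(x.shape[0], latent_dim, requires_grad=True)
    for _ in range(steps):
        loss = (decoder(z) - x).pow(2).sum()
        (g,) = torch.autograd.grad(loss, z)
        z = (z - dt * g).detach().requires_grad_(True)  # forward Euler step
    return z.detach()
```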
arXiv Detail & Related papers (2024-12-01T15:54:50Z)
- Moonwalk: Inverse-Forward Differentiation [4.425689868461635]
Forward-mode gradient computation is an alternative to backpropagation in invertible networks.
Moonwalk is the first forward-based method to compute true gradients in invertible networks in computation time comparable to backpropagation.
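For reference, forward-mode differentiation evaluates a Jacobian-vector product alongside the forward pass; Moonwalk's contribution, per the summary above, is assembling true gradients for invertible networks from such quantities. This snippet shows only the forward-mode primitive (the function and shapes are illustrative):

```python
import torch
from torch.func import jvp

W = torch.randn(4, 3)

def f(x):
    return torch.tanh(x @ W)

x = torch.randn(4)
v = torch.randn(4)                # tangent (direction) vector
out, dout = jvp(f, (x,), (v,))    # f(x) and J_f(x) @ v in a single pass
```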
arXiv Detail & Related papers (2024-02-22T01:33:31Z)
- Faster Diffusion: Rethinking the Role of the Encoder for Diffusion Model Inference [95.42299246592756]
We study the UNet encoder and empirically analyze the encoder features.
We find that encoder features change minimally, whereas the decoder features exhibit substantial variations across different time-steps.
We validate our approach on other tasks: text-to-video, personalized generation and reference-guided generation.
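A minimal sketch of the caching strategy this observation suggests (the unet_encoder/unet_decoder split, refresh schedule, and update rule are placeholders, not the paper's exact procedure):

```python
def denoise(z, timesteps, unet_encoder, unet_decoder, refresh_every=5):
    # Reuse slowly-changing encoder (skip-connection) features across
    # several timesteps; only the decoder half of the UNet runs every step.
    cached = None
    for i, t in enumerate(timesteps):
        if cached is None or i % refresh_every == 0:
            cached = unet_encoder(z, t)
        eps = unet_decoder(z, t, cached)
        z = z - eps                       # placeholder scheduler update
    return z
```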
arXiv Detail & Related papers (2023-12-15T08:46:43Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model [89.8764435351222]
We propose a new family of unbiased estimators called WTA-CRS for matrix multiplication with reduced variance.
Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones.
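For intuition, plain column-row sampling, the classical unbiased estimator that this family builds on, looks as follows; the winner-take-all modification of the sampling distribution is not reproduced here:

```python
import torch

def crs_matmul(A, B, s):
    # Unbiased estimate of A @ B from s sampled column/row pairs, drawn
    # with probability proportional to ||A[:, k]|| * ||B[k, :]||.
    p = A.norm(dim=0) * B.norm(dim=1)
    p = p / p.sum()
    idx = torch.multinomial(p, s, replacement=True)
    scale = 1.0 / (s * p[idx])            # importance-sampling weights
    return (A[:, idx] * scale) @ B[idx, :]
```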
arXiv Detail & Related papers (2023-05-24T15:52:08Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
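One simple way to realize such a penalty (an assumption for illustration; the paper's exact scheme may differ) is to drive the off-diagonal entries of the bottleneck features' correlation matrix toward zero:

```python
import torch

def redundancy_penalty(h):
    # h: (batch, features) bottleneck activations.
    h = (h - h.mean(dim=0)) / (h.std(dim=0) + 1e-6)  # standardize features
    c = (h.T @ h) / h.shape[0]                       # correlation matrix
    off_diag = c - torch.diag(torch.diag(c))
    return off_diag.pow(2).sum()                     # add to the recon loss
```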
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- FastSGD: A Fast Compressed SGD Framework for Distributed Machine Learning [16.542846343774357]
Stochastic Gradient Descent (SGD) is arguably the workhorse algorithm of distributed Machine Learning (ML).
FastSGD represents the gradients as key-value pairs, and compresses both the gradient keys and values in linear time complexity.
FastSGD achieves a compression ratio of up to 4 orders of magnitude and accelerates convergence by up to 8x compared with state-of-the-art methods.
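As a toy stand-in for the key-value idea (FastSGD's actual linear-time coding of keys and values is more elaborate), one can store the largest gradient entries as (index, value) pairs with low-precision values:

```python
import torch

def compress_grad(g, k):
    flat = g.flatten()
    _, keys = flat.abs().topk(k)             # keys: indices of large entries
    return keys, flat[keys].half(), g.shape  # values stored in float16

def decompress_grad(keys, vals, shape):
    flat = torch.zeros(shape).flatten()
    flat[keys] = vals.float()
    return flat.view(shape)
```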
arXiv Detail & Related papers (2021-12-08T13:56:24Z)
- Small Lesion Segmentation in Brain MRIs with Subpixel Embedding [105.1223735549524]
We present a method to segment MRI scans of the human brain into ischemic stroke lesions and normal tissue.
We propose a neural network architecture in the form of a standard encoder-decoder where predictions are guided by a spatial expansion embedding network.
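The spatial expansion guidance plausibly relies on subpixel-style upsampling; a generic block of that kind (channel sizes and its exact role in the architecture are assumptions) is:

```python
import torch.nn as nn

# Subpixel (pixel-shuffle) expansion: learn 4x the channels at low
# resolution, then rearrange them into a 2x-higher-resolution embedding.
expand = nn.Sequential(
    nn.Conv2d(64, 64 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),   # (B, 256, H, W) -> (B, 64, 2H, 2W)
)
```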
arXiv Detail & Related papers (2021-09-18T00:21:17Z)
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed the dynamic neural representational decoder (NRD).
As each location of the encoder's output corresponds to a local patch of the semantic labels, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
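A toy version of the dynamic-decoder idea (a sketch under assumptions, not the paper's NRD): each encoder location predicts the parameters of a tiny linear function that maps patch coordinates to class logits for its local label patch.

```python
import torch
import torch.nn as nn

class TinyDynamicDecoder(nn.Module):
    def __init__(self, enc_dim, num_classes, patch=4):
        super().__init__()
        self.C = num_classes
        # 2 coordinate weights + 1 bias per class, predicted per location
        self.head = nn.Conv2d(enc_dim, num_classes * 3, 1)
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, patch),
                                torch.linspace(-1, 1, patch), indexing="ij")
        self.register_buffer("coords", torch.stack([ys, xs]))  # (2, p, p)

    def forward(self, feats):                      # feats: (B, enc_dim, H, W)
        B, _, H, W = feats.shape
        w = self.head(feats).view(B, self.C, 3, H, W)
        wy, wx, b = w[:, :, 0], w[:, :, 1], w[:, :, 2]   # each (B, C, H, W)
        y, x = self.coords[0], self.coords[1]            # each (p, p)
        # Per-location logits over the local patch: (B, C, H, W, p, p)
        return (wy[..., None, None] * y + wx[..., None, None] * x
                + b[..., None, None])
```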
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
- Gradient flow encoding with distance optimization adaptive step size [10.973034520723957]
We investigate a decoder-only method that uses gradient flow to encode data samples in the latent space.
In our experiments, gradient flow encoding (GFE) showed much higher data efficiency than the autoencoding model, which can be crucial for data-scarce applications.
arXiv Detail & Related papers (2021-05-11T13:38:23Z)