Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example
- URL: http://arxiv.org/abs/2401.04362v1
- Date: Tue, 9 Jan 2024 05:22:15 GMT
- Title: Representative Feature Extraction During Diffusion Process for Sketch Extraction with One Example
- Authors: Kwan Yun, Youngseo Kim, Kwanggyoon Seo, Chang Wook Seo, Junyong Noh
- Abstract summary: We introduce DiffSketch, a method for generating a variety of stylized sketches from images.
Our approach focuses on selecting representative features from the rich semantics of deep features within a pretrained diffusion model.
- Score: 6.520083224801834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce DiffSketch, a method for generating a variety of stylized
sketches from images. Our approach focuses on selecting representative features
from the rich semantics of deep features within a pretrained diffusion model.
This novel sketch generation method can be trained with one manual drawing.
Furthermore, efficient sketch extraction is ensured by distilling a trained
generator into a streamlined extractor. We select denoising diffusion features
through analysis and integrate these selected features with VAE features to
produce sketches. Additionally, we propose a sampling scheme for training
models using a conditional generative approach. Through a series of
comparisons, we verify that distilled DiffSketch not only outperforms existing
state-of-the-art sketch extraction methods but also surpasses diffusion-based
stylization methods in the task of extracting sketches.
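The pipeline described above (select representative features from the diffusion model's deep activations, then combine them with VAE features) can be sketched in a toy form. Note the selection heuristic and all shapes here are illustrative stand-ins, not the paper's actual analysis or architecture:

```python
import numpy as np

def select_representative_features(feats, k=2):
    """Pick the k timestep feature maps with the highest spatial variance.

    feats: array of shape (T, C, H, W), a stand-in for deep features
    collected at T denoising timesteps. This variance heuristic is only
    illustrative; DiffSketch selects features through a dedicated analysis.
    """
    variances = feats.reshape(feats.shape[0], -1).var(axis=1)
    idx = np.argsort(variances)[-k:]          # indices of the k most varied maps
    return feats[idx]

def fuse_with_vae(selected, vae_feats):
    """Concatenate selected diffusion features with VAE features along the
    channel axis, mirroring the idea of combining both feature sources."""
    stacked = selected.reshape(-1, *selected.shape[2:])  # (k*C, H, W)
    return np.concatenate([stacked, vae_feats], axis=0)

# Toy data: 10 timesteps, 4 channels, 8x8 maps; 3-channel VAE features.
rng = np.random.default_rng(0)
diff_feats = rng.normal(size=(10, 4, 8, 8))
vae_feats = rng.normal(size=(3, 8, 8))

fused = fuse_with_vae(select_representative_features(diff_feats, k=2), vae_feats)
print(fused.shape)  # (11, 8, 8): 2 selected maps x 4 channels + 3 VAE channels
```

In the paper, the fused features would feed a sketch generator, which is then distilled into a lightweight extractor for efficiency.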
Related papers
- SwiftSketch: A Diffusion Model for Image-to-Vector Sketch Generation [57.47730473674261]
We introduce SwiftSketch, a model for image-conditioned vector sketch generation that can produce high-quality sketches in less than a second.
SwiftSketch operates by progressively denoising stroke control points sampled from a Gaussian distribution.
ControlSketch is a method that enhances SDS-based techniques by incorporating precise spatial control through a depth-aware ControlNet.
arXiv Detail & Related papers (2025-02-12T18:57:12Z)
- Arbitrary-steps Image Super-resolution via Diffusion Inversion [68.78628844966019]
This study presents a new image super-resolution (SR) technique based on diffusion inversion, aiming at harnessing the rich image priors encapsulated in large pre-trained diffusion models to improve SR performance.
We design a Partial noise Prediction strategy to construct an intermediate state of the diffusion model, which serves as the starting sampling point.
Once trained, this noise predictor can be used to initialize the sampling process partially along the diffusion trajectory, generating the desirable high-resolution result.
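The idea of starting sampling from an intermediate diffusion state rather than from pure noise can be illustrated with the standard DDPM forward relation. The schedule values and shapes below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

# Assumed linear beta schedule (illustrative, not the paper's settings).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def to_intermediate_state(x0, noise, t):
    """Map an input image and a (predicted) noise map to the diffusion
    state x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * noise."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

rng = np.random.default_rng(1)
lr_image = rng.uniform(size=(8, 8))   # stand-in for the low-resolution input
pred_noise = rng.normal(size=(8, 8))  # stand-in for the noise predictor's output
x_mid = to_intermediate_state(lr_image, pred_noise, t=500)
# Sampling would now run only from t=500 down to 0 instead of from t=999.
print(x_mid.shape)
```

Starting part-way along the trajectory is what allows the method to trade off the number of sampling steps against fidelity to the input.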
arXiv Detail & Related papers (2024-12-12T07:24:13Z) - Diffusing Differentiable Representations [60.72992910766525]
We introduce a novel, training-free method for sampling differentiable representations (diffreps) using pretrained diffusion models.
We identify an implicit constraint on the samples induced by the diffrep and demonstrate that addressing this constraint significantly improves the consistency and detail of the generated objects.
arXiv Detail & Related papers (2024-12-09T20:42:58Z) - Training-free Diffusion Model Alignment with Sampling Demons [15.400553977713914]
We propose an optimization approach, dubbed Demon, to guide the denoising process at inference time without backpropagation through reward functions or model retraining.
Our approach works by controlling noise distribution in denoising steps to concentrate density on regions corresponding to high rewards through optimization.
To the best of our knowledge, the proposed approach is the first inference-time, backpropagation-free preference alignment method for diffusion models.
arXiv Detail & Related papers (2024-10-08T07:33:49Z) - Semi-supervised reference-based sketch extraction using a contrastive learning framework [6.20476217797034]
We propose a novel multi-modal sketch extraction method that can imitate the style of a given reference sketch with unpaired data training.
Our method outperforms state-of-the-art sketch extraction methods and unpaired image translation methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2024-07-19T04:51:34Z) - Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion [61.03681839276652]
Diffusion Forcing is a new training paradigm where a diffusion model is trained to denoise a set of tokens with independent per-token noise levels.
We apply Diffusion Forcing to sequence generative modeling by training a causal next-token prediction model to generate one or several future tokens.
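The defining detail above, an independently sampled noise level per token, can be shown in a few lines. The schedule and token shapes are illustrative assumptions:

```python
import numpy as np

# Toy sketch of the Diffusion Forcing noising step: each token in a sequence
# gets its own independently sampled noise level instead of one shared level.
rng = np.random.default_rng(2)
seq_len, dim, T = 5, 4, 1000
tokens = rng.normal(size=(seq_len, dim))

betas = np.linspace(1e-4, 0.02, T)       # assumed linear schedule
alphas_bar = np.cumprod(1.0 - betas)

t_per_token = rng.integers(0, T, size=seq_len)   # independent level per token
scale = np.sqrt(alphas_bar[t_per_token])[:, None]
sigma = np.sqrt(1.0 - alphas_bar[t_per_token])[:, None]
noisy_tokens = scale * tokens + sigma * rng.normal(size=tokens.shape)
print(noisy_tokens.shape)  # (5, 4)
```

A model trained to denoise such sequences can keep earlier tokens nearly clean while later ones stay noisy, which is what enables causal next-token generation within a diffusion framework.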
arXiv Detail & Related papers (2024-07-01T15:43:25Z) - Plug-and-Play Diffusion Distillation [14.359953671470242]
We propose a new distillation approach for guided diffusion models.
An external lightweight guide model is trained while the original text-to-image model remains frozen.
We show that our method reduces the inference of classifier-free guided latent-space diffusion models by almost half.
arXiv Detail & Related papers (2024-06-04T04:22:47Z) - DiffSketcher: Text Guided Vector Sketch Synthesis through Latent
Diffusion Models [33.6615688030998]
DiffSketcher is an innovative algorithm that creates vectorized free-hand sketches using natural language input.
Our experiments show that DiffSketcher achieves greater quality than prior work.
arXiv Detail & Related papers (2023-06-26T13:30:38Z) - DWDN: Deep Wiener Deconvolution Network for Non-Blind Image Deblurring [66.91879314310842]
We propose an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features.
A multi-scale cascaded feature refinement module then predicts the deblurred image from the deconvolved deep features.
We show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts and quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin.
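The classical Wiener deconvolution at the core of this method is easy to demonstrate in the frequency domain. The sketch below applies it directly to pixels for simplicity; DWDN's contribution is applying the same operator to learned deep features:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, snr=1e-4):
    """Classical frequency-domain Wiener deconvolution.

    The Wiener filter is conj(H) / (|H|^2 + 1/SNR); the snr argument here
    plays the role of the regularizing 1/SNR term.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)
    G = np.fft.fft2(blurred)
    wiener = np.conj(H) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(wiener * G))

# Blur a toy image with a 3x3 box kernel (circular convolution via FFT),
# then restore it with the Wiener filter.
rng = np.random.default_rng(3)
image = rng.uniform(size=(16, 16))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(kernel, s=image.shape)
                               * np.fft.fft2(image)))
restored = wiener_deconvolve(blurred, kernel)
# restored is substantially closer to the original than blurred is.
```

In DWDN proper, the deconvolved quantity is a stack of deep feature maps rather than the image itself, and a refinement network predicts the final deblurred image from them.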
arXiv Detail & Related papers (2021-03-18T00:38:11Z) - Distributed Sketching Methods for Privacy Preserving Regression [54.51566432934556]
We leverage randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems.
We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches.
We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
arXiv Detail & Related papers (2020-02-16T08:35:48Z)
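The sketch-and-average scheme described in the last entry can be illustrated with sketched least squares: each worker solves a randomly compressed version of the regression problem, and the parameter estimates are averaged. Everything below (Gaussian sketches, a noiseless toy problem) is an illustrative assumption, not the paper's exact setup or its privacy analysis:

```python
import numpy as np

# Toy parameter averaging over sketched least-squares problems.
rng = np.random.default_rng(4)
n, d, m, workers = 500, 5, 50, 8
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                     # noiseless targets for clarity

estimates = []
for _ in range(workers):
    S = rng.normal(size=(m, n)) / np.sqrt(m)   # Gaussian sketch, m << n rows
    xs, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    estimates.append(xs)
x_avg = np.mean(estimates, axis=0)  # parameter averaging across workers

# In this noiseless case each sketched solve already recovers x_true, so
# the averaged estimate matches it to numerical precision.
print(np.linalg.norm(x_avg - x_true))
```

Each worker only ever sees the sketched data S @ A and S @ b, which is the mechanism behind the dimension reduction, privacy, and straggler-resilience claims; with noisy targets, averaging additionally reduces the variance of the estimate.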
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy or quality of the information presented and is not responsible for any consequences arising from its use.