LayerD: Decomposing Raster Graphic Designs into Layers
- URL: http://arxiv.org/abs/2509.25134v1
- Date: Mon, 29 Sep 2025 17:50:12 GMT
- Title: LayerD: Decomposing Raster Graphic Designs into Layers
- Authors: Tomoyuki Suzuki, Kang-Jun Liu, Naoto Inoue, Kota Yamaguchi
- Abstract summary: LayerD is a method to decompose graphic designs into layers for a re-editable creative workflow. We propose a simple yet effective refinement approach taking advantage of the assumption that layers often exhibit uniform appearance. In experiments, we show that LayerD successfully achieves high-quality decomposition and outperforms baselines.
- Score: 15.294433619347082
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designers craft and edit graphic designs in a layer representation, but layer-based editing becomes impossible once composited into a raster image. In this work, we propose LayerD, a method to decompose raster graphic designs into layers for re-editable creative workflow. LayerD addresses the decomposition task by iteratively extracting unoccluded foreground layers. We propose a simple yet effective refinement approach taking advantage of the assumption that layers often exhibit uniform appearance in graphic designs. As decomposition is ill-posed and the ground-truth layer structure may not be reliable, we develop a quality metric that addresses the difficulty. In experiments, we show that LayerD successfully achieves high-quality decomposition and outperforms baselines. We also demonstrate the use of LayerD with state-of-the-art image generators and layer-based editing.
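The abstract describes LayerD's core loop: iteratively extract the topmost unoccluded foreground layer, then recover what it occluded, until only the background remains. A minimal sketch of that peeling loop is below; `extract_foreground` and `inpaint` are hypothetical placeholders standing in for LayerD's learned components, not its actual API.

```python
def decompose(image, extract_foreground, inpaint, max_layers=10):
    """Iteratively peel unoccluded foreground layers off a composite image.

    extract_foreground(image) -> (layer, mask), where mask is None once
    only the background remains; inpaint(image, mask) reveals the content
    occluded by the peeled layer. Both are assumed, illustrative callables.
    """
    layers = []
    current = image
    for _ in range(max_layers):
        layer, mask = extract_foreground(current)  # topmost unoccluded layer
        if mask is None:  # nothing left but the background
            break
        layers.append(layer)
        current = inpaint(current, mask)  # recover the occluded region
    layers.append(current)  # remaining background layer
    return list(reversed(layers))  # back-to-front stacking order
```

The loop mirrors the paper's formulation only at the control-flow level; the refinement step that exploits uniform layer appearance, and the quality metric, are not modeled here.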
Related papers
- Controllable Layered Image Generation for Real-World Editing [49.81321254149423]
LASAGNA is a novel, unified framework that generates an image jointly with its composing layers. We introduce LASAGNA-48K, a new dataset composed of clean backgrounds and RGBA foregrounds with physically grounded visual effects. We demonstrate that LASAGNA excels in generating highly consistent and coherent results across multiple image layers simultaneously.
arXiv Detail & Related papers (2026-01-21T22:29:33Z) - Qwen-Image-Layered: Towards Inherent Editability via Layer Decomposition [73.43121650616804]
We propose Qwen-Image-Layered, an end-to-end diffusion model that decomposes a single RGB image into multiple semantically disentangled RGBA layers. Our method significantly surpasses existing approaches in decomposition quality and establishes a new paradigm for consistent image editing.
arXiv Detail & Related papers (2025-12-17T17:12:42Z) - From Inpainting to Layer Decomposition: Repurposing Generative Inpainting Models for Image Layer Decomposition [16.7393689710179]
A layered representation enables independent editing of elements, offering greater flexibility for content creation. We observe a strong connection between layer decomposition and in/outpainting tasks, and propose adapting a diffusion-based inpainting model for layer decomposition using lightweight finetuning. To further preserve detail in the latent space, we introduce a novel multi-modal context fusion module with linear attention complexity.
arXiv Detail & Related papers (2025-11-26T02:50:07Z) - Illustrator's Depth: Monocular Layer Index Prediction for Image Decomposition [55.8308608221966]
We introduce Illustrator's Depth, a novel definition of depth that addresses a key challenge in digital content creation: decomposing flat images into editable, ordered layers. Inspired by an artist's compositional process, illustrator's depth assigns a layer index to each pixel, forming an interpretable image decomposition.
arXiv Detail & Related papers (2025-11-21T17:56:43Z) - Rethinking Layered Graphic Design Generation with a Top-Down Approach [76.33538798060326]
Graphic design is crucial for conveying ideas and messages. Designers usually organize their work into objects, backgrounds, and vectorized text layers to simplify editing. With the rise of GenAI methods, an endless supply of high-quality graphic designs in pixel format has become more accessible. Despite this, non-layered designs still inspire human designers, influencing their choices in layouts and text styles, and ultimately guiding the creation of layered designs. Motivated by this observation, we propose Accordion, a graphic design generation framework making the first attempt to convert AI-generated designs into editable layered designs.
arXiv Detail & Related papers (2025-07-08T02:26:08Z) - LayerPeeler: Autoregressive Peeling for Layer-wise Image Vectorization [14.917583676464266]
We introduce LayerPeeler, a novel layer-wise image vectorization approach. By identifying and removing the topmost non-occluded layers, we generate vector graphics with complete paths and coherent layer structures. Our method leverages vision-language models to construct a layer graph that captures relationships among elements.
arXiv Detail & Related papers (2025-05-29T17:58:03Z) - DiffDecompose: Layer-Wise Decomposition of Alpha-Composited Images via Diffusion Transformers [85.1185656296496]
We present DiffDecompose, a diffusion Transformer-based framework that learns the posterior over possible layer decompositions conditioned on the input image. The code and dataset will be available upon paper acceptance.
arXiv Detail & Related papers (2025-05-24T16:08:04Z) - LayeringDiff: Layered Image Synthesis via Generation, then Disassembly with Generative Knowledge [14.481577976493236]
LayeringDiff is a novel pipeline for the synthesis of layered images. By extracting layers from a composite image, rather than generating them from scratch, LayeringDiff bypasses the need for large-scale training. For effective layer decomposition, we adapt a large-scale pretrained generative prior to estimate foreground and background layers.
arXiv Detail & Related papers (2025-01-02T11:18:25Z) - Generative Image Layer Decomposition with Visual Effects [49.75021036203426]
LayerDecomp is a generative framework for image layer decomposition. It produces clean backgrounds and high-quality transparent foregrounds with faithfully preserved visual effects. Our method achieves superior quality in layer decomposition, outperforming existing approaches in object removal and spatial editing tasks.
arXiv Detail & Related papers (2024-11-26T20:26:49Z) - LayerDiff: Exploring Text-guided Multi-layered Composable Image Synthesis via Layer-Collaborative Diffusion Model [70.14953942532621]
LayerDiff is a layer-collaborative diffusion model designed for text-guided, multi-layered, composable image synthesis.
Our model can generate high-quality multi-layered images with performance comparable to conventional whole-image generation methods.
LayerDiff enables a broader range of controllable generative applications, including layer-specific image editing and style transfer.
arXiv Detail & Related papers (2024-03-18T16:28:28Z) - Text2Layer: Layered Image Generation using Latent Diffusion Model [12.902259486204898]
We propose generating layered images directly, approaching synthesis from a layered-image-generation perspective.
To achieve layered image generation, we train an autoencoder that is able to reconstruct layered images.
Experimental results show that the proposed method is able to generate high-quality layered images.
arXiv Detail & Related papers (2023-07-19T06:56:07Z) - Spatially-Adaptive Multilayer Selection for GAN Inversion and Editing [57.46189236379433]
We propose a new method to invert and edit complex images in the latent space of GANs, such as StyleGAN2.
Our key idea is to explore inversion with a collection of layers, spatially adapting the inversion process to the difficulty of the image.
arXiv Detail & Related papers (2022-06-16T17:57:49Z) - SLIDE: Single Image 3D Photography with Soft Layering and Depth-aware Inpainting [54.419266357283966]
Single image 3D photography lets viewers see a still image from novel viewpoints.
Recent approaches combine monocular depth networks with inpainting networks to achieve compelling results.
We present SLIDE, a modular and unified system for single image 3D photography.
arXiv Detail & Related papers (2021-09-02T16:37:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.