SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form
Layout-to-Image Generation
- URL: http://arxiv.org/abs/2308.10156v2
- Date: Wed, 13 Mar 2024 12:16:20 GMT
- Title: SSMG: Spatial-Semantic Map Guided Diffusion Model for Free-form
Layout-to-Image Generation
- Authors: Chengyou Jia, Minnan Luo, Zhuohang Dang, Guang Dai, Xiaojun Chang,
Mengmeng Wang, Jingdong Wang
- Abstract summary: We propose a novel Spatial-Semantic Map Guided (SSMG) diffusion model that adopts the feature map, derived from the layout, as guidance.
SSMG achieves superior generation quality with sufficient spatial and semantic controllability compared to previous works.
We also propose the Relation-Sensitive Attention (RSA) and Location-Sensitive Attention (LSA) mechanisms.
- Score: 68.42476385214785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite significant progress in Text-to-Image (T2I) generative models, even
lengthy and complex text descriptions still struggle to convey detailed
controls. In contrast, Layout-to-Image (L2I) generation, aiming to generate
realistic and complex scene images from user-specified layouts, has risen to
prominence. However, existing methods transform layout information into tokens
or RGB images for conditional control in the generative process, leading to
insufficient spatial and semantic controllability of individual instances. To
address these limitations, we propose a novel Spatial-Semantic Map Guided
(SSMG) diffusion model that adopts the feature map, derived from the layout, as
guidance. Owing to rich spatial and semantic information encapsulated in
well-designed feature maps, SSMG achieves superior generation quality with
sufficient spatial and semantic controllability compared to previous works.
Additionally, we propose the Relation-Sensitive Attention (RSA) and
Location-Sensitive Attention (LSA) mechanisms. The former aims to model the
relationships among multiple objects within scenes while the latter is designed
to heighten the model's sensitivity to the spatial information embedded in the
guidance. Extensive experiments demonstrate that SSMG achieves highly promising
results, setting a new state-of-the-art across a range of metrics encompassing
fidelity, diversity, and controllability.
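The abstract's core idea, rasterizing a layout into a dense feature map that carries both spatial extent and per-instance semantics, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the function name `build_spatial_semantic_map`, the map resolution, and the use of a plain per-instance embedding vector are all assumptions for the sake of the example.

```python
import numpy as np

def build_spatial_semantic_map(layout, h=64, w=64, dim=4):
    """Rasterize a free-form layout into a dense (h, w, dim) feature map.

    `layout` is a list of (box, embedding) pairs: `box` is (x0, y0, x1, y1)
    in normalized [0, 1] coordinates and `embedding` is a semantic vector
    for that instance (e.g. a text embedding). Later instances overwrite
    earlier ones where boxes overlap.
    """
    fmap = np.zeros((h, w, dim), dtype=np.float32)
    for (x0, y0, x1, y1), emb in layout:
        # Clamp each box to at least one pixel so tiny boxes still appear.
        r0, r1 = int(y0 * h), max(int(y0 * h) + 1, int(y1 * h))
        c0, c1 = int(x0 * w), max(int(x0 * w) + 1, int(x1 * w))
        fmap[r0:r1, c0:c1] = emb  # broadcast the semantic vector over the region
    return fmap
```

Unlike token- or RGB-image-based conditioning, such a map keeps each instance's semantics spatially registered, which is presumably what gives the guidance its per-instance controllability.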
Related papers
- Boundary Attention Constrained Zero-Shot Layout-To-Image Generation [47.435234391588494]
Recent text-to-image diffusion models excel at generating high-resolution images from text but struggle with precise control over spatial composition and object counting.
We propose a novel zero-shot L2I approach, BACON, which eliminates the need for additional modules or fine-tuning.
We leverage pixel-to-pixel correlations in the self-attention feature maps to align cross-attention maps and combine three loss functions constrained by boundary attention to update latent features.
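The idea of using self-attention pixel-to-pixel correlations to align a cross-attention map can be illustrated with a small sketch. This is a generic propagation step assumed from the abstract, not BACON's actual algorithm; the name `refine_cross_attention` and the single-step matrix-product formulation are assumptions.

```python
import numpy as np

def refine_cross_attention(self_attn, cross_attn):
    """Propagate cross-attention mass along correlated pixels.

    self_attn:  (N, N) row-stochastic pixel-to-pixel affinities
                (e.g. averaged self-attention maps).
    cross_attn: (N, T) pixel-to-token cross-attention map.
    Returns a (N, T) map where each pixel's token scores are smoothed
    over pixels the self-attention deems similar.
    """
    refined = self_attn @ cross_attn
    # Renormalize so each pixel's token scores sum to one.
    refined /= refined.sum(axis=-1, keepdims=True) + 1e-8
    return refined
```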
arXiv Detail & Related papers (2024-11-15T05:44:45Z)
- EmerDiff: Emerging Pixel-level Semantic Knowledge in Diffusion Models [52.3015009878545]
We develop an image segmentor capable of generating fine-grained segmentation maps without any additional training.
Our framework identifies semantic correspondences between image pixels and spatial locations of low-dimensional feature maps.
In extensive experiments, the produced segmentation maps are demonstrated to be well delineated and capture detailed parts of the images.
arXiv Detail & Related papers (2024-01-22T07:34:06Z)
- Few-shot Image Generation via Information Transfer from the Built Geodesic Surface [2.617962830559083]
We propose a method called Information Transfer from the Built Geodesic Surface (ITBGS).
With the FAGS module, a pseudo-source domain is created by projecting image features from the training dataset into the Pre-Shape Space.
We demonstrate that the proposed method consistently achieves optimal or comparable results across a diverse range of semantically distinct datasets.
arXiv Detail & Related papers (2024-01-03T13:57:09Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- R&B: Region and Boundary Aware Zero-shot Grounded Text-to-image Generation [74.5598315066249]
We probe into zero-shot grounded T2I generation with diffusion models.
We propose a Region and Boundary (R&B) aware cross-attention guidance approach.
arXiv Detail & Related papers (2023-10-13T05:48:42Z)
- LAW-Diffusion: Complex Scene Generation by Diffusion with Layouts [107.11267074981905]
We propose a semantically controllable layout-AWare diffusion model, termed LAW-Diffusion.
We show that LAW-Diffusion yields the state-of-the-art generative performance, especially with coherent object relations.
arXiv Detail & Related papers (2023-08-13T08:06:18Z)
- DuAT: Dual-Aggregation Transformer Network for Medical Image Segmentation [21.717520350930705]
Transformer-based models have been widely demonstrated to be successful in computer vision tasks.
However, they are often dominated by features of large patterns, leading to the loss of local details.
We propose a Dual-Aggregation Transformer Network called DuAT, which is characterized by two innovative designs.
Our proposed model outperforms state-of-the-art methods in the segmentation of skin lesion images, and polyps in colonoscopy images.
arXiv Detail & Related papers (2022-12-21T07:54:02Z)
- Dual Attention GANs for Semantic Image Synthesis [101.36015877815537]
We propose a novel Dual Attention GAN (DAGAN) to synthesize photo-realistic and semantically-consistent images.
We also propose two novel modules, i.e., a position-wise Spatial Attention Module (SAM) and a scale-wise Channel Attention Module (CAM).
DAGAN achieves remarkably better results than state-of-the-art methods, while using fewer model parameters.
arXiv Detail & Related papers (2020-08-29T17:49:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.