ASemConsist: Adaptive Semantic Feature Control for Training-Free Identity-Consistent Generation
- URL: http://arxiv.org/abs/2512.23245v2
- Date: Fri, 02 Jan 2026 08:13:56 GMT
- Title: ASemConsist: Adaptive Semantic Feature Control for Training-Free Identity-Consistent Generation
- Authors: Shin Seong Kim, Minjung Shin, Hyunin Cho, Youngjung Uh
- Abstract summary: ASemConsist enables explicit semantic control over character identity without sacrificing prompt alignment. Our framework achieves state-of-the-art performance, effectively overcoming prior trade-offs.
- Score: 14.341691123354195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent text-to-image diffusion models have significantly improved visual quality and text alignment. However, generating a sequence of images while preserving consistent character identity across diverse scene descriptions remains challenging. Existing methods often struggle with a trade-off between maintaining identity consistency and ensuring per-image prompt alignment. In this paper, we introduce a novel framework, ASemConsist, that addresses this challenge through selective text embedding modification, enabling explicit semantic control over character identity without sacrificing prompt alignment. Furthermore, based on our analysis of padding embeddings in FLUX, we propose a semantic control strategy that repurposes padding embeddings as semantic containers. Additionally, we introduce an adaptive feature-sharing strategy that automatically evaluates textual ambiguity and applies constraints only to ambiguous identity prompts. Finally, we propose a unified evaluation protocol, the Consistency Quality Score (CQS), which integrates identity preservation and per-image text alignment into a single comprehensive metric, explicitly capturing performance imbalances between the two. Our framework achieves state-of-the-art performance, effectively overcoming prior trade-offs. Project page: https://minjung-s.github.io/asemconsist
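Two of the abstract's mechanisms are concrete enough to illustrate. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: `inject_identity_into_padding` (a hypothetical name) shows the general idea of writing identity feature vectors into otherwise-unused padding-token slots of a prompt embedding, with the tensor shapes and the plain overwrite rule assumed; `consistency_quality_score` shows one plausible way a CQS-style metric could fold identity preservation and per-image text alignment into a single number that drops under imbalance, with the harmonic mean as an assumed combination rule (the paper's exact formula may differ).

```python
import numpy as np
import torch

def inject_identity_into_padding(prompt_embeds, padding_mask, identity_feats):
    """Illustrative sketch: repurpose padding embeddings as 'semantic
    containers' by overwriting padding-token slots with identity
    feature vectors. Shapes and the overwrite rule are assumptions.

    prompt_embeds:  (seq_len, dim) text embeddings from the encoder
    padding_mask:   (seq_len,) bool, True where the token is padding
    identity_feats: (k, dim) identity vectors to place into pad slots
    """
    out = prompt_embeds.clone()
    pad_idx = torch.nonzero(padding_mask, as_tuple=False).squeeze(-1)
    k = min(pad_idx.numel(), identity_feats.shape[0])
    out[pad_idx[:k]] = identity_feats[:k]  # fill the first k padding slots
    return out

def consistency_quality_score(identity_scores, alignment_scores):
    """Hypothetical CQS-style metric: combine mean identity preservation
    and mean per-image text alignment with a harmonic mean, which drops
    sharply when the two components are imbalanced.
    """
    id_score = float(np.mean(identity_scores))       # e.g. pairwise identity similarity
    align_score = float(np.mean(alignment_scores))   # e.g. per-image CLIP text alignment
    if id_score + align_score == 0.0:
        return 0.0
    return 2.0 * id_score * align_score / (id_score + align_score)
```

For intuition on the imbalance penalty: a run scoring 0.9 on identity but 0.5 on alignment yields about 0.64 under the harmonic mean, below a balanced 0.7/0.7 run's 0.70, matching the abstract's goal of explicitly capturing performance imbalances between the two metrics.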
Related papers
- Structured Prompt Optimization for Few-Shot Text Classification via Semantic Alignment in Latent Space [3.66782592128973]
This study addresses the issues of semantic entanglement, unclear label structure, and insufficient feature representation in few-shot text classification. It proposes an optimization framework based on structured prompts to enhance semantic understanding and task adaptation under low-resource conditions. Experimental results show that the proposed structured prompt optimization framework effectively alleviates semantic conflicts and label ambiguity in few-shot text classification.
arXiv Detail & Related papers (2026-02-27T07:31:16Z)
- Text-Conditioned Background Generation for Editable Multi-Layer Documents [32.896370365677136]
We present a framework for document-centric background generation with multi-page editing and thematic continuity. Our training-free framework produces visually coherent, text-preserving documents, bridging generative modeling with natural design.
arXiv Detail & Related papers (2025-12-19T01:10:24Z)
- Geometric Disentanglement of Text Embeddings for Subject-Consistent Text-to-Image Generation using A Single Prompt [14.734857939203811]
We propose a training-free approach that addresses semantic entanglement from a subject perspective. Our approach significantly improves both subject consistency and text alignment over existing baselines.
arXiv Detail & Related papers (2025-12-18T11:55:06Z)
- SGDiff: Scene Graph Guided Diffusion Model for Image Collaborative SegCaptioning [53.638998508418545]
This paper introduces a new task, "Image Collaborative Segmentation and Captioning" (SegCaptioning). SegCaptioning aims to translate a straightforward prompt, like a bounding box around an object, into diverse semantic interpretations represented by (caption, masks) pairs. This task poses significant challenges, including accurately capturing a user's intention from a minimal prompt while simultaneously predicting multiple semantically aligned caption words and masks.
arXiv Detail & Related papers (2025-12-01T18:33:04Z)
- Immunizing Images from Text to Image Editing via Adversarial Cross-Attention [17.498230426195114]
We propose a novel attack that targets the visual component of editing methods. We introduce Attention Attack, which disrupts the cross-attention between a textual prompt and the visual representation of the image. Experiments conducted on the TEDBench++ benchmark demonstrate that our attack significantly degrades editing performance while remaining imperceptible (a generic sketch of such an attack loop follows this entry).
arXiv Detail & Related papers (2025-09-12T15:47:50Z)
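Purely as an illustration of the mechanism named above, not the paper's method, the loop below shows the generic shape of an attention-disruption attack: a small L-infinity-bounded perturbation is optimized so that the cross-attention responses between the textual prompt and the image collapse. The hook `get_cross_attn` is hypothetical, and minimizing attention energy is an assumed objective.

```python
import torch

def attention_attack(image, get_cross_attn, eps=8 / 255, alpha=1 / 255, steps=40):
    """Generic sketch of an attention-disruption attack (assumptions
    throughout, not the paper's code). `get_cross_attn(img)` is a
    hypothetical hook returning the cross-attention maps between the
    textual prompt and the image inside the editing model.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        attn = get_cross_attn((image + delta).clamp(0, 1))
        loss = attn.pow(2).sum()  # assumed objective: total attention energy
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # descend to suppress attention
            delta.clamp_(-eps, eps)             # keep the change imperceptible
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```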
- Constrained Prompt Enhancement for Improving Zero-Shot Generalization of Vision-Language Models [57.357091028792325]
Vision-language models (VLMs) pre-trained on web-scale data exhibit promising zero-shot generalization but often suffer from semantic misalignment. We propose a novel constrained prompt enhancement (CPE) method to improve visual-textual alignment. Our approach consists of two key components: Topology-Guided Synonymous Semantic Generation (TGSSG) and Category-Agnostic Discriminative Region Selection (CADRS).
arXiv Detail & Related papers (2025-08-24T15:45:22Z)
- Context-Adaptive Multi-Prompt Embedding with Large Language Models for Vision-Language Alignment [33.152772648399846]
We propose a novel approach to enrich semantic representations in vision-language contrastive learning. We leverage a pretrained LLM as the text encoder within the CLIP framework, processing all prompts jointly in a single forward pass. The resulting prompt embeddings are combined into a unified text representation, enabling semantically richer alignment with visual features (a toy pooling sketch follows this entry).
arXiv Detail & Related papers (2025-08-03T20:48:43Z)
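As a toy illustration of the pooling step described above (the `encode` callable, the mean-pooling rule, and the shapes are assumptions rather than the paper's design):

```python
import torch
import torch.nn.functional as F

def unified_text_embedding(encode, prompts):
    """Toy sketch: embed several context-adaptive prompt variants of one
    caption and pool them into a single text representation for
    contrastive alignment. `encode` is a hypothetical callable mapping
    a prompt string to a (dim,) embedding tensor.
    """
    embs = torch.stack([encode(p) for p in prompts])  # (k, dim)
    pooled = embs.mean(dim=0)                         # assumed pooling: simple mean
    return F.normalize(pooled, dim=-1)                # unit-normalize for cosine similarity
```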
- FocusDiff: Advancing Fine-Grained Text-Image Alignment for Autoregressive Visual Generation through RL [78.59912944698992]
We propose FocusDiff to enhance fine-grained text-image semantic alignment. We construct a new dataset of paired texts and images with similar overall expressions but distinct local semantics. Our approach achieves state-of-the-art performance on existing text-to-image benchmarks and significantly outperforms prior methods on PairComp.
arXiv Detail & Related papers (2025-06-05T18:36:33Z)
- Localizing Factual Inconsistencies in Attributable Text Generation [74.11403803488643]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation. We show that QASemConsistency yields factual consistency scores that correlate well with human judgments.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision [87.15580604023555]
Unpair-Seg is a novel weakly-supervised open-vocabulary segmentation framework.
It learns from unpaired image-mask and image-text pairs, which can be independently and efficiently collected.
It achieves 14.6% and 19.5% mIoU on the ADE-847 and PASCAL Context-459 datasets.
arXiv Detail & Related papers (2024-02-14T06:01:44Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of the classification, segmentation, and recognition branches, resulting in deeper feature sharing (a toy sketch of this multi-branch layout follows this entry).
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
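To make the shared-branch layout concrete, here is an illustrative toy module, not the TextFormer architecture: the stand-in backbone, query count, head shapes, and all names are assumptions. A shared encoder and transformer decoder produce per-query features consumed by classification, segmentation, and recognition heads, so the three branches share deep features.

```python
import torch
import torch.nn as nn

class MultiBranchSpotterSketch(nn.Module):
    """Illustrative-only sketch of a query-based multi-branch text
    spotter: shared query features feed three task heads."""

    def __init__(self, dim=256, num_queries=100, vocab=97, max_len=25):
        super().__init__()
        self.encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # stand-in backbone
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=3)
        self.cls_head = nn.Linear(dim, 2)                # text / not-text per query
        self.seg_head = nn.Linear(dim, 32 * 32)          # coarse per-query mask logits
        self.rec_head = nn.Linear(dim, max_len * vocab)  # per-query character logits

    def forward(self, images):
        feats = self.encoder(images).flatten(2).transpose(1, 2)     # (B, HW, dim)
        q = self.queries.unsqueeze(0).expand(images.size(0), -1, -1)
        h = self.decoder(q, feats)                                  # shared query features
        return self.cls_head(h), self.seg_head(h), self.rec_head(h)
```

Routing one decoder output into all three heads is what lets gradients from classification, segmentation, and recognition jointly shape the same query features.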