Does the Manipulation Process Matter? RITA: Reasoning Composite Image Manipulations via Reversely-Ordered Incremental-Transition Autoregression
- URL: http://arxiv.org/abs/2509.20006v2
- Date: Thu, 25 Sep 2025 01:49:11 GMT
- Title: Does the Manipulation Process Matter? RITA: Reasoning Composite Image Manipulations via Reversely-Ordered Incremental-Transition Autoregression
- Authors: Xuekang Zhu, Ji-Zhe Zhou, Kaiwen Feng, Chenfan Qu, Yunfei Wang, Liting Zhou, Jian Liu,
- Abstract summary: We reformulate image manipulation localization as a conditional sequence prediction task, proposing the RITA framework. RITA predicts manipulated regions layer-by-layer in an ordered manner, using each step's prediction as the condition for the next. To enable training and evaluation, we synthesize multi-step manipulation data and construct a new benchmark, HSIM.
- Score: 13.933194190556714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image manipulations often entail a complex manipulation process, comprising a series of editing operations to create a deceptive image, exhibiting sequential and hierarchical characteristics. However, existing image manipulation localization (IML) methods remain manipulation-process-agnostic, directly producing localization masks in a one-shot prediction paradigm without modeling the underlying editing steps. This one-shot paradigm compresses the high-dimensional compositional space into a single binary mask, inducing severe dimensional collapse and thereby creating a fundamental mismatch with the intrinsic nature of the IML task. To address this, we are the first to reformulate image manipulation localization as a conditional sequence prediction task, proposing the RITA framework. RITA predicts manipulated regions layer-by-layer in an ordered manner, using each step's prediction as the condition for the next, thereby explicitly modeling temporal dependencies and hierarchical structures among editing operations. To enable training and evaluation, we synthesize multi-step manipulation data and construct a new benchmark, HSIM. We further propose the HSS metric to assess sequential order and hierarchical alignment. Extensive experiments show RITA achieves state-of-the-art (SOTA) performance on traditional benchmarks and provides a solid foundation for the novel hierarchical localization task, validating its potential as a general and effective paradigm. The code and dataset will be publicly available.
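The layer-by-layer conditioning described in the abstract can be sketched as a simple autoregressive loop. This is a minimal illustration, not RITA's actual architecture: `predict_step` is a hypothetical stand-in for the paper's learned per-step predictor (its exact interface is not given in the abstract), but the loop shows how each step's mask is fed back as the condition for the next.

```python
import numpy as np

def predict_step(image, prev_mask, step):
    """Hypothetical one-step predictor. In RITA this would be a learned
    network conditioned on the image and the previous mask; here we simply
    mark one new row per step so the conditioning loop is visible."""
    new_mask = prev_mask.copy()
    new_mask[step, :] = 1  # pretend the model localized one new edited layer
    return new_mask

def autoregressive_localization(image, num_steps):
    """Layer-by-layer localization: each step's predicted mask becomes the
    condition for the next step, mirroring the ordered, incremental
    prediction described in the abstract."""
    h, w = image.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    layers = []
    for step in range(num_steps):
        mask = predict_step(image, mask, step)
        layers.append(mask.copy())
    return layers

layers = autoregressive_localization(np.zeros((4, 4)), 3)
print([int(m.sum()) for m in layers])  # → [4, 8, 12]: masks grow step by step
```

The design point is that the predictor sees its own previous output, so temporal dependencies between editing operations can be modeled, unlike a one-shot method that emits a single binary mask.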
Related papers
- Spanning Tree Autoregressive Visual Generation [51.7635842702602]
We present Spanning Tree Autoregressive (STAR) modeling, which can incorporate prior knowledge of images, such as center bias and locality, to maintain sampling performance.
arXiv Detail & Related papers (2025-11-21T09:45:17Z)
- XY-Cut++: Advanced Layout Ordering via Hierarchical Mask Mechanism on a Novel Benchmark [1.9020548287019097]
XY-Cut++ is a layout ordering method that integrates pre-mask processing, multi-granularity segmentation, and cross-modal matching. It achieves state-of-the-art performance (98.8 BLEU overall) while maintaining simplicity and efficiency.
arXiv Detail & Related papers (2025-04-14T14:19:57Z)
- Context-Aware Weakly Supervised Image Manipulation Localization with SAM Refinement [52.15627062770557]
Malicious image manipulation poses societal risks, increasing the importance of effective image manipulation detection methods. Recent work in image manipulation detection has largely been driven by fully supervised approaches. We present a novel weakly supervised framework based on a dual-branch Transformer-CNN architecture.
arXiv Detail & Related papers (2025-03-26T07:35:09Z)
- Feature Alignment with Equivariant Convolutions for Burst Image Super-Resolution [52.55429225242423]
We propose a novel framework for Burst Image Super-Resolution (BISR), featuring an equivariant convolution-based alignment. This enables the alignment transformation to be learned via explicit supervision in the image domain and easily applied in the feature domain. Experiments on BISR benchmarks show the superior performance of our approach in both quantitative metrics and visual quality.
arXiv Detail & Related papers (2025-03-11T11:13:10Z)
- Task Success Prediction for Open-Vocabulary Manipulation Based on Multi-Level Aligned Representations [1.1650821883155187]
We propose Contrastive $\lambda$-Repformer, which predicts task success for table-top manipulation tasks by aligning images with instruction sentences.
Our method integrates the following three key types of features into a multi-level aligned representation.
We evaluate Contrastive $\lambda$-Repformer on a dataset based on a large-scale standard dataset, the RT-1 dataset, and on a physical robot platform.
arXiv Detail & Related papers (2024-10-01T06:35:34Z)
- MaskInversion: Localized Embeddings via Optimization of Explainability Maps [49.50785637749757]
MaskInversion generates a context-aware embedding for a query image region specified by a mask at test time.
It can be used for a broad range of tasks, including open-vocabulary class retrieval, referring expression comprehension, as well as for localized captioning and image generation.
arXiv Detail & Related papers (2024-07-29T14:21:07Z)
- Generalized Consistency Trajectory Models for Image Manipulation [59.576781858809355]
Diffusion models (DMs) excel in unconditional generation, as well as in applications such as image editing and restoration. This work aims to unlock the full potential of consistency trajectory models (CTMs) by proposing generalized CTMs (GCTMs). We discuss the design space of GCTMs and demonstrate their efficacy in various image manipulation tasks such as image-to-image translation, restoration, and editing.
arXiv Detail & Related papers (2024-03-19T07:24:54Z)
- Tuning-Free Inversion-Enhanced Control for Consistent Image Editing [44.311286151669464]
We present a novel approach called Tuning-free Inversion-enhanced Control (TIC)
TIC correlates features from the inversion process with those from the sampling process to mitigate the inconsistency in DDIM reconstruction.
We also propose a mask-guided attention concatenation strategy that combines contents from both the inversion and the naive DDIM editing processes.
arXiv Detail & Related papers (2023-12-22T11:13:22Z)
- MAGIC: Mask-Guided Image Synthesis by Inverting a Quasi-Robust Classifier [37.774220727662914]
We propose a one-shot mask-guided image synthesis that allows controlling manipulations of a single image.
Our proposed method, entitled MAGIC, leverages structured gradients from a pre-trained quasi-robust classifier.
MAGIC aggregates gradients over the input, driven by a guide binary mask that enforces a strong, spatial prior.
arXiv Detail & Related papers (2022-09-23T12:15:40Z)
- Semantic Image Synthesis via Diffusion Models [174.24523061460704]
Denoising Diffusion Probabilistic Models (DDPMs) have achieved remarkable success in various image generation tasks. Recent work on semantic image synthesis mainly follows the de facto GAN-based approaches. We propose a novel framework based on DDPM for semantic image synthesis.
arXiv Detail & Related papers (2022-06-30T18:31:51Z)
- Bi-Granularity Contrastive Learning for Post-Training in Few-Shot Scene [10.822477939237459]
We propose contrastive masked language modeling (CMLM) for post-training to integrate both token-level and sequence-level contrastive learnings.
CMLM surpasses several recent post-training methods in few-shot settings without the need for data augmentation.
arXiv Detail & Related papers (2021-06-04T08:17:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.