ViewDelta: Scaling Scene Change Detection through Text-Conditioning
- URL: http://arxiv.org/abs/2412.07612v3
- Date: Wed, 13 Aug 2025 16:49:44 GMT
- Title: ViewDelta: Scaling Scene Change Detection through Text-Conditioning
- Authors: Subin Varghese, Joshua Gao, Vedhus Hoskere
- Abstract summary: We introduce a general framework for Scene Change Detection (SCD) that addresses the core ambiguity of distinguishing "relevant" from "nuisance" changes. We propose ViewDelta, a text-conditioned change detection framework that uses natural language prompts to define relevant changes. Our code and dataset are available at https://joshuakgao.github.io/viewdelta/.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a generalized framework for Scene Change Detection (SCD) that addresses the core ambiguity of distinguishing "relevant" from "nuisance" changes, enabling effective joint training of a single model across diverse domains and applications. Existing methods struggle to generalize due to differences in dataset labeling, where changes such as vegetation growth or lane marking alterations may be labeled as relevant in one dataset and irrelevant in another. To resolve this ambiguity, we propose ViewDelta, a text-conditioned change detection framework that uses natural language prompts to define relevant changes precisely, such as a single attribute, a specific set of classes, or all observable differences. To facilitate training in this paradigm, we release the Conditional Change Segmentation dataset (CSeg), the first large-scale synthetic dataset for text-conditioned SCD, consisting of over 500,000 image pairs with more than 300,000 unique textual prompts describing relevant changes. Experiments demonstrate that a single ViewDelta model trained jointly on CSeg, SYSU-CD, PSCD, VL-CMU-CD, and their unaligned variants achieves performance competitive with or superior to dataset-specific models, highlighting text conditioning as a powerful approach for generalizable SCD. Our code and dataset are available at https://joshuakgao.github.io/viewdelta/.
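To make the text-conditioned formulation concrete, here is a minimal, hypothetical sketch of the interface such a model exposes: an image pair plus a prompt embedding go in, per-pixel change logits come out. The layer sizes, FiLM-style fusion, and random prompt embedding below are illustrative assumptions, not ViewDelta's actual architecture (in practice a frozen text encoder would supply the prompt embedding).

```python
# Hypothetical text-conditioned SCD interface; names and architecture are
# illustrative stand-ins, not the authors' code.
import torch
import torch.nn as nn

class TextConditionedSCD(nn.Module):
    def __init__(self, txt_dim=64, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        # FiLM-style conditioning: prompt embedding -> per-channel scale/shift
        self.film = nn.Linear(txt_dim, 2 * feat)
        self.head = nn.Conv2d(feat, 1, 1)  # binary change logits

    def forward(self, img_a, img_b, txt_emb):
        # Feature difference localizes change; the text decides relevance.
        diff = self.backbone(img_a) - self.backbone(img_b)
        scale, shift = self.film(txt_emb).chunk(2, dim=-1)
        diff = diff * scale[:, :, None, None] + shift[:, :, None, None]
        return self.head(diff)  # (B, 1, H, W)

model = TextConditionedSCD()
a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
prompt_emb = torch.rand(1, 64)  # stand-in for a frozen text encoder output
print(model(a, b, prompt_emb).shape)  # torch.Size([1, 1, 128, 128])
```

The same image pair yields different masks under different prompt embeddings, which is exactly the ambiguity-resolution mechanism the abstract describes.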
Related papers
- UniVCD: A New Method for Unsupervised Change Detection in the Open-Vocabulary Era [0.0]
Change detection (CD) identifies scene changes from multi-temporal observations and is widely used in urban development and environmental monitoring.
Most existing CD methods rely on supervised learning, making performance strongly dataset-dependent and incurring high annotation costs.
We propose Unified Open-Vocabulary Change Detection (UniVCD), an unsupervised, open-vocabulary change detection method built on frozen SAM2 and CLIP.
arXiv Detail & Related papers (2025-12-15T08:42:23Z)
- Referring Change Detection in Remote Sensing Imagery [49.841833753558575]
We introduce Referring Change Detection (RCD), which leverages natural language prompts to detect specific classes of changes in remote sensing images.
We propose a two-stage framework consisting of (I) RCDNet, a cross-modal fusion network designed for referring change detection, and (II) RCDGen, a diffusion-based synthetic data generation pipeline.
arXiv Detail & Related papers (2025-12-12T16:57:12Z)
- UniChange: Unifying Change Detection with Multimodal Large Language Model [17.98018484822312]
Change detection (CD) is a fundamental task for monitoring and analyzing land cover dynamics.
Current models typically acquire limited knowledge from single-type annotated data.
We develop UniChange to leverage diverse binary change detection (BCD) and semantic change detection (SCD) datasets.
arXiv Detail & Related papers (2025-11-04T14:31:06Z)
- FoBa: A Foreground-Background co-Guided Method and New Benchmark for Remote Sensing Semantic Change Detection [48.06921153684768]
We present a new benchmark for remote sensing semantic change detection (SCD) called LevirSCD.
The dataset covers 16 change categories and 210 specific change types, with more fine-grained class definitions.
We propose a foreground-background co-guided SCD (FoBa) method, which leverages foregrounds enriched with contextual information to guide the model.
FoBa achieves competitive results compared to current SOTA methods, with SeK improvements of 1.48%, 3.61%, and 2.81% on three benchmarks.
arXiv Detail & Related papers (2025-09-19T09:19:57Z)
- DeltaVLM: Interactive Remote Sensing Image Change Analysis via Instruction-guided Difference Perception [0.846600473226587]
We introduce remote sensing image change analysis (RSICA) as a new paradigm that combines the strengths of change detection and visual question answering.
We propose DeltaVLM, an end-to-end architecture tailored for interactive RSICA.
DeltaVLM features three innovations: (1) a fine-tuned bi-temporal vision encoder to capture temporal differences; (2) a visual difference perception module with a cross-semantic relation measuring mechanism to interpret changes; and (3) an instruction-guided Q-former to effectively extract query-relevant difference information.
arXiv Detail & Related papers (2025-07-30T03:14:27Z)
- ControlThinker: Unveiling Latent Semantics for Controllable Image Generation through Visual Reasoning [76.2503352325492]
ControlThinker is a novel framework that employs a "comprehend-then-generate" paradigm.
Latent semantics from control images are mined to enrich text prompts.
This enriched semantic understanding then seamlessly aids in image generation without the need for additional complex modifications.
arXiv Detail & Related papers (2025-06-04T05:56:19Z)
- DILLEMA: Diffusion and Large Language Models for Multi-Modal Augmentation [0.13124513975412253]
We present a novel framework for testing vision neural networks that leverages Large Language Models and control-conditioned Diffusion Models.
Our approach begins by translating images into detailed textual descriptions using a captioning model.
These descriptions are then used to produce new test images through a text-to-image diffusion process.
arXiv Detail & Related papers (2025-02-05T16:35:42Z)
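The caption-then-generate loop that DILLEMA describes can be sketched with off-the-shelf Hugging Face models; the model names below are illustrative stand-ins, and plain text-to-image generation substitutes for the paper's control-conditioned diffusion.

```python
# Sketch of a caption -> perturb -> regenerate augmentation loop; models and
# the prompt edit are assumptions, not DILLEMA's exact components.
import torch
from PIL import Image
from transformers import pipeline
from diffusers import StableDiffusionPipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
generator = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def augment(image: Image.Image, edit: str) -> Image.Image:
    # 1) Translate the image into a textual description.
    caption = captioner(image)[0]["generated_text"]
    # 2) Perturb the description and synthesize a new test image from it.
    return generator(f"{caption}, {edit}").images[0]

# e.g. augment(Image.open("dog.jpg"), "in heavy snow at night")
```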
- Detect Changes like Humans: Incorporating Semantic Priors for Improved Change Detection [52.62459671461816]
This paper explores incorporating semantic priors from visual foundation models to improve the ability to detect changes.
Inspired by the human visual paradigm, a novel dual-stream feature decoder is derived to distinguish changes by combining semantic-aware features and difference-aware features.
arXiv Detail & Related papers (2024-12-22T08:27:15Z)
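A minimal sketch of the dual-stream idea from "Detect Changes like Humans": one branch reads semantic-aware features (e.g., from a visual foundation model), the other reads the bi-temporal feature difference, and the two are fused for the change prediction. Layer sizes and the concatenation fusion are assumptions, not the paper's design.

```python
# Toy dual-stream decoder: semantic-aware + difference-aware branches.
import torch
import torch.nn as nn

class DualStreamDecoder(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.sem_stream = nn.Conv2d(c, c, 3, padding=1)   # "what is there?"
        self.diff_stream = nn.Conv2d(c, c, 3, padding=1)  # "what moved?"
        self.head = nn.Conv2d(2 * c, 1, 1)                # change logits

    def forward(self, feat_t0, feat_t1, sem_prior):
        diff = self.diff_stream(torch.abs(feat_t0 - feat_t1))
        sem = self.sem_stream(sem_prior)  # foundation-model features
        return self.head(torch.cat([sem, diff], dim=1))

dec = DualStreamDecoder()
f0, f1, sp = (torch.rand(1, 64, 32, 32) for _ in range(3))
print(dec(f0, f1, sp).shape)  # torch.Size([1, 1, 32, 32])
```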
- ZeroSCD: Zero-Shot Street Scene Change Detection [2.3020018305241337]
Scene Change Detection is a challenging task in computer vision and robotics.
Traditional change detection methods rely on trained models that take image pairs as input and estimate the changes.
We propose ZeroSCD, a zero-shot scene change detection framework that eliminates the need for training.
arXiv Detail & Related papers (2024-09-23T17:53:44Z)
- Zero-Shot Scene Change Detection [14.095215136905553]
Our method exploits the change-detection behavior of a tracking model by feeding it reference and query images instead of consecutive frames.
We extend our approach to video to exploit rich temporal information, enhancing scene change detection performance.
arXiv Detail & Related papers (2024-06-17T05:03:44Z)
- Dual-Image Enhanced CLIP for Zero-Shot Anomaly Detection [58.228940066769596]
We introduce a Dual-Image Enhanced CLIP approach, leveraging a joint vision-language scoring system.
Our methods process pairs of images, utilizing each as a visual reference for the other, thereby enriching the inference process with visual context.
Our approach exploits the potential of joint vision-language anomaly detection and demonstrates performance comparable to current SOTA methods across various datasets.
arXiv Detail & Related papers (2024-05-08T03:13:20Z)
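Dual-Image Enhanced CLIP's joint scoring can be illustrated with plain tensor math: each image is scored against normal/anomalous text prompts and against its paired image as a visual reference. The random embeddings and the equal weighting below are stand-ins, not the paper's implementation.

```python
# Toy dual-image anomaly score combining text and cross-image similarity.
import torch
import torch.nn.functional as F

def dual_image_score(img_emb_a, img_emb_b, normal_txt, anomal_txt):
    a, b = F.normalize(img_emb_a, dim=-1), F.normalize(img_emb_b, dim=-1)
    n, m = F.normalize(normal_txt, dim=-1), F.normalize(anomal_txt, dim=-1)
    # Vision-language score: probability mass on the anomalous prompt.
    text_score = torch.softmax(torch.stack([a @ n, a @ m]), dim=0)[1]
    # Visual-reference score: dissimilarity to the paired image.
    visual_score = 1 - (a @ b)
    return 0.5 * text_score + 0.5 * visual_score  # assumed equal weighting

emb = lambda: torch.rand(512)  # stand-ins for CLIP embeddings
print(dual_image_score(emb(), emb(), emb(), emb()))
```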
- Language Guided Domain Generalized Medical Image Segmentation [68.93124785575739]
Single source domain generalization holds promise for more reliable and consistent image segmentation across real-world clinical settings.
We propose an approach that explicitly leverages textual information by incorporating a contrastive learning mechanism guided by the text encoder features.
Our approach achieves favorable performance against existing methods in the literature.
arXiv Detail & Related papers (2024-04-01T17:48:15Z)
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations [61.132408427908175]
Zero-shot GAN adaptation aims to reuse well-trained generators to synthesize images of an unseen target domain.
With only a single representative text feature instead of real images, the synthesized images gradually lose diversity.
We propose a novel method to find semantic variations of the target text in the CLIP space.
arXiv Detail & Related papers (2023-08-21T08:12:28Z)
- Bilevel Fast Scene Adaptation for Low-Light Image Enhancement [50.639332885989255]
Enhancing images in low-light scenes is a challenging but widely studied task in computer vision.
The main obstacle lies in modeling the distribution discrepancy across different scenes.
We introduce the bilevel paradigm to model the above latent correspondence.
A bilevel learning framework is constructed to endow the encoder with scene-irrelevant generality across diverse scenes.
arXiv Detail & Related papers (2023-06-02T08:16:21Z)
- Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation [66.6546668043249]
ALIA (Automated Language-guided Image Augmentation) is a method which utilizes large vision and language models to automatically generate natural language descriptions of a dataset's domains.
To maintain data integrity, a model trained on the original dataset filters out minimal image edits and those which corrupt class-relevant information.
We show that ALIA is able to surpass traditional data augmentation and text-to-image generated data on fine-grained classification tasks.
arXiv Detail & Related papers (2023-05-25T17:43:05Z)
- Vision Transformer with Quadrangle Attention [76.35955924137986]
We propose a novel quadrangle attention (QA) method that extends the window-based attention to a general quadrangle formulation.
Our method employs an end-to-end learnable quadrangle regression module that predicts a transformation matrix to transform default windows into target quadrangles.
We integrate QA into plain and hierarchical vision transformers to create a new architecture named QFormer, which requires only minor code modifications and adds negligible extra computational cost.
arXiv Detail & Related papers (2023-03-27T11:13:50Z)
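Quadrangle attention's core mechanism, warping a default square window with a predicted transform before sampling the features that attention operates on, can be sketched as follows. An affine transform stands in for the paper's more general quadrangle formulation, and the single-window setup is illustrative only.

```python
# Warp a default window with a (learned) transform, then resample features.
import torch
import torch.nn.functional as F

def sample_quadrangle(feat, theta):
    """feat: (B, C, H, W) window features; theta: (B, 2, 3) transform params."""
    B, C, H, W = feat.shape
    grid = F.affine_grid(theta, size=(B, C, H, W), align_corners=False)
    return F.grid_sample(feat, grid, align_corners=False)

feat = torch.rand(1, 8, 7, 7)                # one 7x7 window of features
theta = torch.tensor([[[1.0, 0.1, 0.0],      # slight shear, as if predicted
                       [0.0, 1.0, 0.0]]])    # by a regression module
print(sample_quadrangle(feat, theta).shape)  # torch.Size([1, 8, 7, 7])
```

In the full method, a regression module predicts one transform per window and the warped windows feed standard attention; here the transform is hard-coded for brevity.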
- Neighborhood Contrastive Transformer for Change Captioning [80.10836469177185]
We propose a neighborhood contrastive transformer to improve the model's ability to perceive various changes across different scenes.
The proposed method achieves the state-of-the-art performance on three public datasets with different change scenarios.
arXiv Detail & Related papers (2023-03-06T14:39:54Z)
- Adversarial Virtual Exemplar Learning for Label-Frugal Satellite Image Change Detection [12.18340575383456]
In this paper, we investigate satellite image change detection using active learning.
Our method is interactive and relies on a question and answer model which asks the oracle (user) questions about the most informative display.
The main contribution of our method lies in a novel adversarial model that frugally probes the oracle with only the most representative, diverse, and uncertain virtual exemplars.
arXiv Detail & Related papers (2022-12-28T17:46:20Z)
- Self-Pair: Synthesizing Changes from Single Source for Object Change Detection in Remote Sensing Imagery [6.586756080460231]
We train a change detector using two spatially unrelated images with corresponding semantic labels, such as buildings.
We show that manipulating the source image as an after-image is crucial to the performance of change detection.
Our method outperforms existing methods based on single-temporal supervision.
arXiv Detail & Related papers (2022-12-20T13:26:42Z)
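Self-Pair's single-source idea can be sketched in a few lines: manipulate a labeled region of one image to manufacture an "after" image, and the change mask comes for free. Mean-color erasure below is a crude stand-in for the paper's actual manipulations.

```python
# Synthesize a change pair (before, after, mask) from a single labeled image.
import numpy as np

def make_change_pair(img: np.ndarray, obj_mask: np.ndarray):
    """img: (H, W, 3) uint8; obj_mask: (H, W) bool, e.g. a building footprint."""
    after = img.copy()
    # Toy "object removal": paint the region with the image's mean color.
    after[obj_mask] = img.reshape(-1, 3).mean(axis=0).astype(np.uint8)
    return img, after, obj_mask.copy()  # supervision comes for free

rng = np.random.default_rng(0)
img = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
before, after, gt = make_change_pair(img, mask)
print(gt.sum())  # 400 changed pixels
```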
- SceneComposer: Any-Level Semantic Image Synthesis [80.55876413285587]
We propose a new framework for conditional image synthesis from semantic layouts of any precision level.
The framework naturally reduces to text-to-image (T2I) at the lowest level with no shape information, and it becomes segmentation-to-image (S2I) at the highest level.
We introduce several novel techniques to address the challenges coming with this new setup.
arXiv Detail & Related papers (2022-11-21T18:59:05Z)
- The Change You Want to See [91.3755431537592]
Given two images of the same scene, being able to automatically detect the changes in them has practical applications in a variety of domains.
We tackle the change detection problem with the goal of detecting "object-level" changes in an image pair despite differences in their viewpoint and illumination.
arXiv Detail & Related papers (2022-09-28T18:10:09Z)
- Simple Open-Vocabulary Object Detection with Vision Transformers [51.57562920090721]
We propose a strong recipe for transferring image-text models to open-vocabulary object detection.
We use a standard Vision Transformer architecture with minimal modifications, contrastive image-text pre-training, and end-to-end detection fine-tuning.
We provide the adaptation strategies and regularizations needed to attain very strong performance on zero-shot text-conditioned and one-shot image-conditioned object detection.
arXiv Detail & Related papers (2022-05-12T17:20:36Z)
- ObjectFormer for Image Manipulation Detection and Localization [118.89882740099137]
We propose ObjectFormer to detect and localize image manipulations.
We extract high-frequency features of the images and combine them with RGB features as multimodal patch embeddings.
We conduct extensive experiments on various datasets and the results verify the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-03-28T12:27:34Z)
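The high-frequency features that ObjectFormer combines with RGB can be approximated with an FFT high-pass filter; the cutoff and the simple channel concatenation below are assumptions, not the paper's exact embedding scheme.

```python
# FFT high-pass features concatenated with RGB as a multimodal input.
import torch

def high_freq(img, cutoff=0.1):
    """img: (B, C, H, W). Zero out the central (low-frequency) spectrum block."""
    f = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    B, C, H, W = img.shape
    cy, cx = H // 2, W // 2
    ry, rx = int(H * cutoff), int(W * cutoff)
    f[..., cy - ry:cy + ry, cx - rx:cx + rx] = 0
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real

img = torch.rand(1, 3, 32, 32)
patches = torch.cat([img, high_freq(img)], dim=1)  # RGB + high-frequency
print(patches.shape)  # torch.Size([1, 6, 32, 32])
```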
- Frugal Learning of Virtual Exemplars for Label-Efficient Satellite Image Change Detection [12.18340575383456]
In this paper, we devise a novel interactive satellite image change detection algorithm based on active learning.
The proposed framework is iterative and relies on a question and answer model which asks the oracle (user) questions about the most informative display.
The contribution of our framework resides in a novel display model which selects the most representative and diverse virtual exemplars.
arXiv Detail & Related papers (2022-03-22T09:29:42Z)
- Region-level Active Learning for Cluttered Scenes [60.93811392293329]
We introduce a new strategy that subsumes previous Image-level and Object-level approaches into a generalized, Region-level approach.
We show that this approach significantly decreases labeling effort and improves rare object search on realistic data with inherent class-imbalance and cluttered scenes.
arXiv Detail & Related papers (2021-08-20T14:02:38Z)
- Unsupervised Change Detection in Satellite Images with Generative Adversarial Network [20.81970476609318]
We propose a novel change detection framework that utilizes a Generative Adversarial Network (GAN) to generate better coregistered images.
The optimized GAN model produces well-coregistered images in which changes can be easily spotted, and the change map is then obtained through a comparison strategy.
arXiv Detail & Related papers (2020-09-08T10:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.