C-NERF: Representing Scene Changes as Directional Consistency Difference-based NeRF
- URL: http://arxiv.org/abs/2312.02751v2
- Date: Sat, 23 Dec 2023 07:12:54 GMT
- Title: C-NERF: Representing Scene Changes as Directional Consistency Difference-based NeRF
- Authors: Rui Huang (1), Binbin Jiang (1), Qingyi Zhao (1), William Wang,
Yuxiang Zhang (1), Qing Guo (2 and 3) ((1) College of Computer Science and
Technology, Civil Aviation University of China, China, (2) IHPC, Agency for
Science, Technology and Research, Singapore, (3) CFAR, Agency for Science,
Technology and Research, Singapore)
- Abstract summary: We aim to detect the changes caused by object variations in a scene represented by neural radiance fields (NeRFs).
Given an arbitrary view and two sets of scene images captured at different timestamps, we can predict the scene changes in that view.
Our approach surpasses state-of-the-art 2D change detection and NeRF-based methods by a significant margin.
- Score: 3.0023333254953406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we aim to detect the changes caused by object
variations in a scene represented by neural radiance fields (NeRFs). Given an
arbitrary view and two sets of scene images captured at different timestamps,
we can predict the scene changes in that view, which has significant potential
applications in scene monitoring and measurement. Our preliminary studies
found that this task cannot be easily achieved with existing NeRFs and 2D
change detection methods, which produce many false or missing detections. The
main reason is that 2D change detection relies on the pixel appearance
difference between spatially aligned image pairs and neglects the stereo
information encoded in the NeRF. To address these limitations, we propose
C-NERF, which represents scene changes as a directional consistency
difference-based NeRF and consists of three modules. We first spatially align
the two NeRFs built before and after the changes. Then, we identify change
points using a direction-consistency constraint: real change points have
similar change representations across view directions, whereas fake change
points do not. Finally, we design a change map rendering process based on the
built NeRFs that can generate the change map for an arbitrarily specified view
direction. To validate the effectiveness, we build a new dataset containing
ten scenes covering diverse scenarios with different changing objects. Our
approach surpasses state-of-the-art 2D change detection and NeRF-based methods
by a significant margin.
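The direction-consistency constraint is the key filtering step: a real change should produce a similar change signal no matter which direction the point is viewed from. The abstract gives no pseudocode, so the following is a minimal NumPy sketch of one plausible reading, with hypothetical function names and thresholds:

```python
import numpy as np

def direction_consistent_changes(change_reprs, mag_thresh=0.1, var_thresh=0.02):
    """Filter candidate change points via direction consistency.

    change_reprs: (N, D) array holding, for each of N candidate 3D points,
    a scalar change representation (e.g., a radiance difference between the
    two spatially aligned NeRFs) sampled from D view directions.

    A real change looks like a change from every direction, so its
    per-direction values agree; a fake change (e.g., an alignment or
    reconstruction artifact) varies with the viewing direction.
    """
    magnitude = np.abs(change_reprs).mean(axis=1)  # average change strength
    variance = change_reprs.var(axis=1)            # spread across directions
    # Keep points that change noticeably AND consistently across directions.
    return (magnitude > mag_thresh) & (variance < var_thresh)
```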
Related papers
- NeRFDeformer: NeRF Transformation from a Single View via 3D Scene Flows [60.291277312569285]
We present a method for automatically modifying a NeRF representation based on a single observation.
Our method defines the transformation as a 3D flow, specifically as a weighted linear blending of rigid transformations.
We also introduce a new dataset for exploring the problem of modifying a NeRF scene through a single observation.
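The phrase "weighted linear blending of rigid transformations" suggests moving each point by several rigid transforms and averaging the results with per-point weights, much like linear blend skinning. A NumPy sketch under that assumption (names and shapes are illustrative, not from the paper's code):

```python
import numpy as np

def blend_rigid_transforms(points, rotations, translations, weights):
    """Warp points by a weighted linear blend of rigid transformations.

    points:       (N, 3) 3D points on the source scene surface
    rotations:    (K, 3, 3) rotation matrices, one per rigid transform
    translations: (K, 3) translation vectors
    weights:      (N, K) per-point blend weights, rows summing to 1
    """
    # (K, N, 3): the point cloud under each of the K rigid transforms
    warped = np.einsum('kij,nj->kni', rotations, points) + translations[:, None, :]
    # (N, 3): per-point weighted average over the K candidate positions
    return np.einsum('nk,kni->ni', weights, warped)
```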
arXiv Detail & Related papers (2024-06-15T07:58:08Z)
- ChangeBind: A Hybrid Change Encoder for Remote Sensing Change Detection [16.62779899494721]
Change detection (CD) is a fundamental task in remote sensing (RS) that aims to detect semantic changes in the same geographical region across different timestamps.
We propose an effective Siamese-based framework to encode the semantic changes occurring in the bi-temporal RS images.
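To make the Siamese idea concrete, here is a hedged PyTorch sketch of a shared-weight encoder applied to both timestamps, with the change decoded from the fused bi-temporal features; the layer layout is a generic stand-in, not ChangeBind's actual hybrid architecture:

```python
import torch
import torch.nn as nn

class SiameseChangeEncoder(nn.Module):
    """Generic Siamese change encoder: one shared backbone, two images."""

    def __init__(self, channels=64):
        super().__init__()
        # Shared-weight backbone applied to both timestamps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Head that turns the fused bi-temporal features into a change map.
        self.head = nn.Conv2d(2 * channels, 1, 1)

    def forward(self, img_t1, img_t2):
        f1, f2 = self.backbone(img_t1), self.backbone(img_t2)
        # Concatenate the difference and the sum as simple change cues.
        fused = torch.cat([torch.abs(f1 - f2), f1 + f2], dim=1)
        return torch.sigmoid(self.head(fused))  # per-pixel change probability
```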
arXiv Detail & Related papers (2024-04-26T17:47:14Z)
- VcT: Visual change Transformer for Remote Sensing Image Change Detection [16.778418602705287]
We propose a novel Visual change Transformer (VcT) model for the visual change detection problem.
Top-K reliable tokens are mined from the map and refined using a clustering algorithm.
Extensive experiments on multiple benchmark datasets validate the effectiveness of our proposed VcT model.
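One plausible reading of the top-K mining step, as a sketch: score each token with a reliability value, keep the K highest-scoring tokens, and refine them by snapping each to its cluster centroid (plain k-means stands in for whatever clustering VcT actually uses; all names are hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

def mine_reliable_tokens(tokens, reliability, k=16, n_clusters=4):
    """Select the top-k most reliable tokens and refine them by clustering.

    tokens:      (N, D) token features from the change-candidate map
    reliability: (N,) per-token reliability scores
    """
    top_idx = np.argsort(reliability)[-k:]   # indices of the top-k tokens
    selected = tokens[top_idx]               # (k, D)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(selected)
    # Refine each selected token by snapping it to its cluster centroid.
    return km.cluster_centers_[km.labels_]   # (k, D) refined tokens
```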
arXiv Detail & Related papers (2023-10-17T17:25:31Z)
- Learning Transformations To Reduce the Geometric Shift in Object Detection [60.20931827772482]
We tackle geometric shifts emerging from variations in the image capture process.
We introduce a self-training approach that learns a set of geometric transformations to minimize these shifts.
We evaluate our method on two different shifts, i.e., a camera's field of view (FoV) change and a viewpoint change.
arXiv Detail & Related papers (2023-01-13T11:55:30Z)
- GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields [100.53114092627577]
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z)
- The Change You Want to See [91.3755431537592]
Given two images of the same scene, being able to automatically detect the changes in them has practical applications in a variety of domains.
We tackle the change detection problem with the goal of detecting "object-level" changes in an image pair despite differences in their viewpoint and illumination.
arXiv Detail & Related papers (2022-09-28T18:10:09Z)
- Dual UNet: A Novel Siamese Network for Change Detection with Cascade Differential Fusion [4.651756476458979]
We propose a novel Siamese neural network for the change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bi-temporal images individually, we design an encoder differential-attention module that focuses on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
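A differential-attention module of this kind presumably derives attention weights from the disagreement between the bi-temporal encoder features; a minimal PyTorch sketch of that reading (not the paper's exact module):

```python
import torch
import torch.nn as nn

class DifferentialAttention(nn.Module):
    """Attend to spatial locations where bi-temporal features disagree."""

    def __init__(self, channels):
        super().__init__()
        # Map the per-pixel feature difference to a single attention logit.
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f1, f2):
        attn = torch.sigmoid(self.score(torch.abs(f1 - f2)))  # (B, 1, H, W)
        # Re-weight both feature maps so changed regions dominate.
        return f1 * attn, f2 * attn
```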
arXiv Detail & Related papers (2022-08-12T14:24:09Z)
- City-scale Scene Change Detection using Point Clouds [71.73273007900717]
We propose a method for detecting structural changes in a city using images captured from mounted cameras at two different times.
A direct comparison of the two point clouds for change detection is not ideal due to inaccurate geo-location information.
To circumvent this problem, we propose a deep learning-based non-rigid registration on the point clouds.
Experiments show that our method is able to detect scene changes effectively, even in the presence of viewpoint and illumination differences.
arXiv Detail & Related papers (2021-03-26T08:04:13Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
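D-NeRF factorizes a dynamic scene into a canonical NeRF plus a time-conditioned deformation network that maps each point at time t to a displacement into the canonical space. A compressed PyTorch sketch of that two-network structure (layer sizes are placeholders, and positional encoding is omitted):

```python
import torch
import torch.nn as nn

class DNeRFSketch(nn.Module):
    """Canonical NeRF plus a time-conditioned deformation field."""

    def __init__(self, hidden=128):
        super().__init__()
        # Deformation net: (x, t) -> displacement into canonical space.
        self.deform = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # Canonical NeRF: canonical position -> (RGB, density).
        self.canonical = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x, t):
        # x: (N, 3) sample points; t: (N, 1) timestamps per point.
        dx = self.deform(torch.cat([x, t], dim=-1))
        rgb_sigma = self.canonical(x + dx)
        return rgb_sigma[..., :3], rgb_sigma[..., 3]  # color, density
```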
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
- DASNet: Dual Attentive Fully Convolutional Siamese Networks for Change Detection of High Resolution Satellite Images [17.839181739760676]
The research objective is to identify the change information of interest and filter out irrelevant change information, which acts as an interference factor.
Recently, the rise of deep learning has provided new tools for change detection, which have yielded impressive results.
We propose a new method, namely, dual attentive fully convolutional Siamese networks (DASNet) for change detection in high-resolution images.
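"Dual attentive" here refers to combining channel attention with spatial attention on the Siamese features; a generic PyTorch sketch of such a block (not DASNet's exact layers):

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Apply channel attention then spatial attention to a feature map."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(  # squeeze-and-excitation style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(  # one attention logit per pixel
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, f):
        f = f * self.channel(f)     # re-weight channels
        return f * self.spatial(f)  # re-weight spatial locations
```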
arXiv Detail & Related papers (2020-03-07T16:57:10Z)