Removing Dynamic Objects for Static Scene Reconstruction using Light
Fields
- URL: http://arxiv.org/abs/2003.11076v1
- Date: Tue, 24 Mar 2020 19:05:17 GMT
- Title: Removing Dynamic Objects for Static Scene Reconstruction using Light
Fields
- Authors: Pushyami Kaveti, Sammie Katt, Hanumant Singh
- Abstract summary: Dynamic environments pose challenges to visual simultaneous localization and mapping (SLAM) algorithms.
Light Fields capture a bundle of light rays emerging from a single point in space, allowing us to see through dynamic objects by refocusing past them.
We present a method to synthesize a refocused image of the static background in the presence of dynamic objects.
- Score: 2.286041284499166
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: There is a general expectation that robots should operate in environments
that consist of static and dynamic entities including people, furniture and
automobiles. These dynamic environments pose challenges to visual simultaneous
localization and mapping (SLAM) algorithms by introducing errors into the
front-end. Light fields provide one possible method for addressing such
problems by capturing more complete visual information of a scene. In
contrast to a single ray from a perspective camera, light fields capture a
bundle of light rays emerging from a single point in space, allowing us to see
through dynamic objects by refocusing past them.
In this paper we present a method that uses a light field acquired with a
linear camera array to synthesize a refocused image of the static background
in the presence of dynamic objects. We simultaneously estimate both the depth
and the refocused image of the static scene, using semantic segmentation to
detect dynamic objects in a single time step. This eliminates the need for
initializing a static map. The algorithm is parallelizable and is implemented
on a GPU, allowing us to execute it at close to real-time speeds. We
demonstrate the effectiveness of our method on real-world data acquired using
a small robot with a five-camera array.
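As a rough illustration of this idea (a minimal sketch, not the authors' implementation), the Python snippet below performs semantically masked synthetic-aperture refocusing for a linear array: each view is shifted by the disparity of a hypothesized background depth, and the views are averaged using only pixels the segmenter labels static. The function name, the fronto-parallel depth model, shared intrinsics, and the use of np.roll (which wraps at image borders) are all simplifying assumptions.

```python
import numpy as np

def refocus_static(images, masks, baselines, depth, focal_px):
    """Shift-and-average refocus onto a plane at `depth`, skipping
    pixels flagged as dynamic. Illustrative sketch only.

    images:    list of HxWx3 float arrays from the linear camera array
    masks:     list of HxW bool arrays, True where a pixel is static
    baselines: per-camera horizontal offset from the reference view (m)
    depth:     hypothesized depth of the static background plane (m)
    focal_px:  focal length in pixels (shared intrinsics assumed)
    """
    h, w, _ = images[0].shape
    acc = np.zeros((h, w, 3))
    weight = np.zeros((h, w, 1))
    for img, mask, b in zip(images, masks, baselines):
        # Disparity of the focal plane for this view: d = f * b / Z.
        # The sign convention depends on the direction of the baseline.
        d = int(round(focal_px * b / depth))
        acc += np.roll(img, d, axis=1) * np.roll(mask, d, axis=1)[..., None]
        weight += np.roll(mask, d, axis=1)[..., None]
    # Masked average; dynamic pixels contribute nothing.
    return acc / np.maximum(weight, 1)
```

Since the paper estimates the depth and the refocused image jointly, a simple stand-in for that step is a plane sweep: evaluate a range of depth candidates and keep, per pixel, the one that maximizes photo-consistency among the static views.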
Related papers
- OmniLocalRF: Omnidirectional Local Radiance Fields from Dynamic Videos [14.965321452764355]
We introduce a new approach called Omnidirectional Local Radiance Fields (OmniLocalRF) that can render static-only scene views.
Our approach combines the principles of local radiance fields with the bidirectional optimization of omnidirectional rays.
Our experiments validate that OmniLocalRF outperforms existing methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2024-03-31T12:55:05Z)
- Semantic Attention Flow Fields for Monocular Dynamic Scene Decomposition [51.67493993845143]
We reconstruct a neural volume that captures time-varying color, density, scene flow, semantics, and attention information.
The semantics and attention let us identify salient foreground objects separately from the background across spacetime.
We show that this method can decompose dynamic scenes in an unsupervised way with competitive performance to a supervised method.
arXiv Detail & Related papers (2023-03-02T19:00:05Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- D$^2$NeRF: Self-Supervised Decoupling of Dynamic and Static Objects from a Monocular Video [23.905013304668426]
Given a monocular video, segmenting and decoupling dynamic objects while recovering the static environment is a widely studied problem in machine intelligence.
We introduce Decoupled Dynamic Neural Radiance Field (D$^2$NeRF), a self-supervised approach that takes a monocular video and learns a 3D scene representation.
arXiv Detail & Related papers (2022-05-31T14:41:24Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- TwistSLAM: Constrained SLAM in Dynamic Environment [0.0]
We present TwistSLAM, a semantic, dynamic, stereo SLAM system that can track dynamic objects in the scene.
Our algorithm creates clusters of points according to their semantic class.
It uses the static parts of the environment to robustly localize the camera and tracks the remaining objects.
arXiv Detail & Related papers (2022-02-24T22:08:45Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in data acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624]
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
arXiv Detail & Related papers (2020-11-27T19:06:50Z)
- Empty Cities: a Dynamic-Object-Invariant Space for Visual SLAM [6.693607456009373]
We present a data-driven approach to obtain the static image of a scene, eliminating dynamic objects that might have been present at the time of traversing the scene with a camera.
We introduce an end-to-end deep learning framework to turn images of an urban environment into realistic static frames suitable for localization and mapping.
arXiv Detail & Related papers (2020-10-15T10:31:12Z)
- DOT: Dynamic Object Tracking for Visual SLAM [83.69544718120167]
DOT combines instance segmentation and multi-view geometry to generate masks for dynamic objects.
To determine which objects are actually moving, DOT first segments instances of potentially dynamic objects and then, using the estimated camera motion, tracks those objects by minimizing the photometric reprojection error (a generic sketch of this objective appears after the list).
Our results show that our approach significantly improves the accuracy and robustness of ORB-SLAM 2, especially in highly dynamic scenes.
arXiv Detail & Related papers (2020-09-30T18:36:28Z)
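The DOT entry above refers to minimizing the photometric reprojection error. As a generic, hedged illustration of that objective (not DOT's actual code; the intrinsics K, candidate pose T, depth map, and sampled pixel set are all assumed inputs), the residual for a grayscale image pair can be computed as follows:

```python
import numpy as np

def photometric_error(ref_img, cur_img, depth, K, T, pixels):
    """Mean absolute photometric residual for reference pixels reprojected
    into the current frame under candidate relative pose T (4x4 matrix).
    Grayscale images assumed; real systems add robust losses and
    sub-pixel interpolation. Illustrative sketch only."""
    K_inv = np.linalg.inv(K)
    total, n = 0.0, 0
    for u, v in pixels:
        # Back-project the reference pixel to 3D using its depth.
        p = depth[v, u] * (K_inv @ np.array([u, v, 1.0]))
        # Transform into the current frame and project with the pinhole model.
        q = T[:3, :3] @ p + T[:3, 3]
        if q[2] <= 0:
            continue  # point behind the camera
        u2, v2 = (K @ (q / q[2]))[:2]
        iu, iv = int(round(u2)), int(round(v2))
        if 0 <= iv < cur_img.shape[0] and 0 <= iu < cur_img.shape[1]:
            total += abs(float(cur_img[iv, iu]) - float(ref_img[v, u]))
            n += 1
    return total / max(n, 1)
```

In a system like DOT, an objective of this form would be minimized over an object's motion parameters to track each segmented instance once the camera motion has been estimated.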