Time-based Mapping of Space Using Visual Motion Invariants
- URL: http://arxiv.org/abs/2310.09632v1
- Date: Sat, 14 Oct 2023 17:55:49 GMT
- Title: Time-based Mapping of Space Using Visual Motion Invariants
- Authors: Juan D. Yepes, Daniel Raviv
- Abstract summary: This paper focuses on visual motion-based invariants that result in a representation of 3D points in which the stationary environment remains invariant.
We refer to the resulting optical flow-based invariants as 'Time-Clearance' and the well-known 'Time-to-Contact' (TTC).
We present simulations of a camera moving relative to a 3D object, show snapshots of its projected images as captured by the rectilinearly moving camera, and show that the object appears unchanged in the new domain over time.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on visual motion-based invariants that result in a
representation of 3D points in which the stationary environment remains
invariant, ensuring shape constancy. This is achieved even as the images
undergo constant change due to camera motion. Nonlinear functions of measurable
optical flow, which are related to geometric 3D invariants, are utilized to
create a novel representation. We refer to the resulting optical flow-based
invariants as 'Time-Clearance' and the well-known 'Time-to-Contact' (TTC).
Since these invariants remain constant over time, it becomes straightforward to
detect moving points that do not adhere to the expected constancy. We present
simulations of a camera moving relative to a 3D object, show snapshots of its
projected images as captured by the rectilinearly moving camera, and show that
the object appears unchanged in the new domain over time. In addition, Unity-based
simulations demonstrate color-coded transformations of a projected 3D scene,
illustrating how moving objects can be readily identified. This representation
is straightforward, relying on simple optical flow functions. It requires only
one camera, and there is no need to determine the magnitude of the camera's
velocity vector. Furthermore, the representation is pixel-based, making it
suitable for parallel processing.
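As a rough, pixel-wise illustration of how such an invariant can be computed and checked, the sketch below estimates per-pixel Time-to-Contact from the radial component of a dense optical-flow field and flags pixels that deviate from a reference map of the stationary scene. It assumes pure camera translation along the optical axis (focus of expansion at the principal point) and uses the classical TTC approximation r / r_dot; the function names, the tolerance, and the deviation test are illustrative assumptions, not the paper's exact Time-Clearance or detection formulation.

```python
import numpy as np

def time_to_contact(flow_u, flow_v, cx, cy):
    """Per-pixel Time-to-Contact (TTC) from dense optical flow.

    Assumes pure camera translation along the optical axis, so the focus of
    expansion (FOE) coincides with the principal point (cx, cy). Under that
    motion, TTC at a pixel is r / r_dot, where r is the pixel's distance from
    the FOE and r_dot is the radial component of its flow vector. This is the
    classical approximation, not the paper's exact invariant.
    """
    h, w = flow_u.shape
    x, y = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r = np.sqrt(x**2 + y**2) + 1e-9               # distance from the FOE
    r_dot = (x * flow_u + y * flow_v) / r         # radial flow component
    return r / np.where(np.abs(r_dot) > 1e-9, r_dot, 1e-9)

def flag_moving_pixels(invariant, reference, tol=0.1):
    """Mark pixels whose invariant deviates from a stationary-scene reference
    map by more than a relative tolerance (an illustrative, hypothetical test)."""
    return np.abs(invariant - reference) > tol * np.abs(reference)
```

Because every pixel is handled independently from its own flow vector, a map of this kind is trivially parallelizable, which is consistent with the abstract's remark that the representation is pixel-based and requires only one camera.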
Related papers
- V3D-SLAM: Robust RGB-D SLAM in Dynamic Environments with 3D Semantic Geometry Voting [1.3493547928462395]
Simultaneous localization and mapping (SLAM) in highly dynamic environments is challenging due to the correlation between moving objects and the camera pose.
We propose a robust method, V3D-SLAM, to remove moving objects via two lightweight re-evaluation stages.
Our experiment on the TUM RGB-D benchmark on dynamic sequences with ground-truth camera trajectories showed that our methods outperform the most recent state-of-the-art SLAM methods.
arXiv Detail & Related papers (2024-10-15T21:08:08Z)
- Invariant-based Mapping of Space During General Motion of an Observer [0.0]
This paper explores visual motion-based invariants, resulting in a new instantaneous domain.
We make use of nonlinear functions derived from measurable optical flow, which are linked to geometric 3D invariants.
We present simulations involving a camera that translates and rotates relative to a 3D object, capturing snapshots of the camera's projected images.
arXiv Detail & Related papers (2023-11-18T17:40:35Z)
- Detecting Moving Objects Using a Novel Optical-Flow-Based Range-Independent Invariant [0.0]
We present an optical-flow-based transformation that yields a consistent 2D invariant image output regardless of time instants, range of points in 3D, and the speed of the camera.
In the new domain, projections of 3D points that deviate from the values of the predefined lookup image can be clearly identified as moving relative to the stationary 3D environment.
arXiv Detail & Related papers (2023-10-14T17:42:19Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z)
- Geometric Moment Invariants to Motion Blur [4.8915390363596005]
We focus on removing the interference of motion blur by deriving motion-blur invariants.
Based on geometric moments and a mathematical model of motion blur, we prove that the geometric moments of a blurred image and of the original image are linearly related.
Surprisingly, we find that some geometric moment invariants are invariant not only to spatial transforms but also to motion blur.
arXiv Detail & Related papers (2021-01-21T14:50:34Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.