Detecting Moving Objects Using a Novel Optical-Flow-Based
Range-Independent Invariant
- URL: http://arxiv.org/abs/2310.09627v1
- Date: Sat, 14 Oct 2023 17:42:19 GMT
- Title: Detecting Moving Objects Using a Novel Optical-Flow-Based
Range-Independent Invariant
- Authors: Daniel Raviv, Juan D. Yepes, Ayush Gowda
- Abstract summary: We present an optical-flow-based transformation that yields a consistent 2D invariant image output regardless of time instants, range of points in 3D, and the speed of the camera.
In the new domain, projections of 3D points that deviate from the values of the predefined lookup image can be clearly identified as moving relative to the stationary 3D environment.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper focuses on a novel approach for detecting moving objects during
camera motion. We present an optical-flow-based transformation that yields a
consistent 2D invariant image output regardless of time instants, range of
points in 3D, and the speed of the camera. In other words, this transformation
generates a lookup image that remains invariant despite the changing projection
of the 3D scene and camera motion. In the new domain, projections of 3D points
that deviate from the values of the predefined lookup image can be clearly
identified as moving relative to the stationary 3D environment, making them
seamlessly detectable. The method does not require prior knowledge of the
direction of motion or speed of the camera, nor does it necessitate 3D point
range information. It is well-suited for real-time parallel processing,
rendering it highly practical for implementation. We have validated the
effectiveness of the new domain in scenarios involving rectilinear camera
motion, demonstrating its robustness both in simulation and with real-world
data. This approach introduces new ways of detecting moving objects during
camera motion and lays the foundation for future research on moving-object
detection under six-degrees-of-freedom camera motion.
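The abstract does not spell out the transformation itself, but the geometric fact it builds on can be illustrated: under rectilinear camera motion, the optical flow of every stationary 3D point is directed radially away from the focus of expansion (FOE), regardless of the point's range or the camera's speed. Below is a minimal sketch of a detector based on that range-independent property; the function name, thresholds, and the radial-consistency test are our own illustration, not the paper's actual lookup-image method.

```python
import numpy as np

def moving_object_mask(flow, foe, angle_thresh_deg=10.0, min_mag=1e-3):
    """Flag pixels whose optical flow deviates from the radial expansion
    pattern expected under pure rectilinear camera motion.

    For a camera translating toward the focus of expansion (FOE), the flow
    at every *stationary* point is directed radially away from the FOE,
    independent of the point's range and of the camera's speed. Pixels
    whose flow direction deviates from the radial direction are therefore
    candidates for independently moving objects.

    flow : (H, W, 2) array of optical-flow vectors (u, v)
    foe  : (x, y) focus of expansion in pixel coordinates
    """
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Radial direction from the FOE to each pixel.
    rad = np.stack([xs - foe[0], ys - foe[1]], axis=-1).astype(float)
    rad_norm = np.linalg.norm(rad, axis=-1, keepdims=True)
    flow_norm = np.linalg.norm(flow, axis=-1, keepdims=True)
    # Cosine of the angle between the flow and the radial direction.
    denom = np.clip(rad_norm * flow_norm, 1e-12, None)
    cos_ang = (rad * flow).sum(axis=-1, keepdims=True) / denom
    deviation = np.degrees(np.arccos(np.clip(cos_ang[..., 0], -1.0, 1.0)))
    # Ignore pixels with negligible flow (e.g. at the FOE itself).
    valid = flow_norm[..., 0] > min_mag
    return valid & (deviation > angle_thresh_deg)
```

Note that the check needs neither the camera speed nor any depth information, only the FOE, which mirrors the range- and speed-independence claimed in the abstract.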
Related papers
- DO3D: Self-supervised Learning of Decomposed Object-aware 3D Motion and Depth from Monocular Videos [76.01906393673897]
We propose a self-supervised method to jointly learn 3D motion and depth from monocular videos.
Our system contains a depth estimation module to predict depth, and a new decomposed object-wise 3D motion (DO3D) estimation module to predict ego-motion and 3D object motion.
Our model delivers superior performance in all evaluated settings.
arXiv Detail & Related papers (2024-03-09T12:22:46Z)
- Invariant-based Mapping of Space During General Motion of an Observer [0.0]
This paper explores visual motion-based invariants, resulting in a new instantaneous domain.
We make use of nonlinear functions derived from measurable optical flow, which are linked to geometric 3D invariants.
We present simulations involving a camera that translates and rotates relative to a 3D object, capturing snapshots of the camera projected images.
arXiv Detail & Related papers (2023-11-18T17:40:35Z)
- Joint 3D Shape and Motion Estimation from Rolling Shutter Light-Field Images [2.0277446818410994]
We propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor.
Our method leverages the 3D information cues present in the light-field and the motion information provided by the rolling shutter effect.
We present a generic model for the imaging process of this sensor and a two-stage algorithm that minimizes the re-projection error.
arXiv Detail & Related papers (2023-11-02T15:08:18Z)
- Time-based Mapping of Space Using Visual Motion Invariants [0.0]
This paper focuses on visual motion-based invariants that result in a representation of 3D points in which the stationary environment remains invariant.
We refer to the resulting optical-flow-based invariants as 'Time-Clearance' and the well-known 'Time-to-Contact'.
We present simulations of a camera moving relative to a 3D object, snapshots of its projected images captured by a rectilinearly moving camera, and the object as it appears unchanged in the new domain over time.
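The well-known time-to-contact invariant mentioned in this summary has a compact closed form that can be computed directly from the flow, without knowing range or speed: tau = r / r_dot, where r is a point's image distance from the focus of expansion and r_dot is the radial component of its optical flow. A minimal sketch (the function name and interface are our own, not from the paper):

```python
import numpy as np

def time_to_contact(pix, flow_vec, foe):
    """Estimate time-to-contact (in frames) for one image point under
    rectilinear camera motion: tau = r / r_dot, where r is the point's
    distance from the focus of expansion (FOE) and r_dot is the radial
    component of its optical flow. tau depends only on measurable image
    quantities, not on the point's absolute range or the camera's speed.
    """
    r_vec = np.asarray(pix, float) - np.asarray(foe, float)
    r = np.linalg.norm(r_vec)
    # Project the flow onto the unit FOE->point direction.
    r_dot = float(np.dot(flow_vec, r_vec / r))
    return r / r_dot
```

For example, a point 10 px from the FOE expanding radially at 0.5 px/frame yields a time-to-contact of 20 frames.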
arXiv Detail & Related papers (2023-10-14T17:55:49Z)
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking [81.68608983602581]
We find that the motion cue of objects along different time frames is critical in 3D multi-object tracking.
We propose MoMA-M3T, a framework that mainly consists of three motion-aware components.
We conduct extensive experiments on the nuScenes and KITTI datasets to demonstrate our MoMA-M3T achieves competitive performance against state-of-the-art methods.
arXiv Detail & Related papers (2023-08-22T17:53:58Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
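The front-end step this summary describes, turning pairwise optical flow into dense point trajectories, can be sketched generically by chaining per-frame flow fields; this is an illustrative simplification (nearest-neighbour lookup, no occlusion handling), not the paper's neural pipeline.

```python
import numpy as np

def chain_trajectories(flows, seeds):
    """Chain pairwise optical-flow fields into point trajectories, the
    generic front-end of indirect structure-from-motion pipelines.

    flows : list of (H, W, 2) flow fields; flows[t] maps frame t -> t+1
    seeds : (N, 2) array of (x, y) start positions in frame 0
    returns a (T+1, N, 2) array of positions, one row per frame
    """
    pts = np.asarray(seeds, float)
    traj = [pts.copy()]
    h, w = flows[0].shape[:2]
    for flow in flows:
        # Nearest-neighbour flow lookup (bilinear interpolation and
        # occlusion checks would be needed in a real system).
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]
        traj.append(pts.copy())
    return np.stack(traj)
```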
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Consistent Depth of Moving Objects in Video [52.72092264848864]
We present a method to estimate depth of a dynamic scene, containing arbitrary moving objects, from an ordinary video captured with a moving camera.
We formulate this objective in a new test-time training framework where a depth-prediction CNN is trained in tandem with an auxiliary scene-flow prediction over the entire input video.
We demonstrate accurate and temporally coherent results on a variety of challenging videos containing diverse moving objects (pets, people, cars) as well as camera motion.
arXiv Detail & Related papers (2021-08-02T20:53:18Z)
- Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z)
- Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds [22.850519892606716]
We have developed a motion detector based on the shallow visual neural pathway of Drosophila.
This detector is sensitive to the movement of objects and can well suppress background noise.
An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generates the 3D bounding boxes and the object categories.
arXiv Detail & Related papers (2020-05-06T10:04:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.