Detection of moving objects through turbulent media. Decomposition of Oscillatory vs Non-Oscillatory spatio-temporal vector fields
- URL: http://arxiv.org/abs/2410.21551v1
- Date: Mon, 28 Oct 2024 21:29:56 GMT
- Title: Detection of moving objects through turbulent media. Decomposition of Oscillatory vs Non-Oscillatory spatio-temporal vector fields
- Authors: Jerome Gilles, Francis Alvarez, Nicholas B. Ferrante, Margaret Fortman, Lena Tahir, Alex Tarter, Anneke von Seeger
- Abstract summary: In this paper, we investigate how moving objects can be detected when impacted by atmospheric turbulence.
To perform this task, we propose an extension of 2D cartoon+texture decomposition algorithms to 3D vector fields.
- Score: 0.0
- Abstract: In this paper, we investigate how moving objects can be detected when images are impacted by atmospheric turbulence. We present a geometric spatio-temporal point of view on the problem and show that it is possible to distinguish movement due to turbulence from moving objects. To perform this task, we propose an extension of 2D cartoon+texture decomposition algorithms to 3D vector fields. Our algorithm is based on curvelet spaces, which better characterize the geometry of the motion flow. We present experiments on real data which illustrate the efficiency of the proposed method.
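To give a concrete feel for the cartoon+texture idea the abstract extends, here is a minimal sketch of the classical 2D decomposition: a low-pass filter extracts the smooth "cartoon" part, and the residual is the oscillatory "texture" part. This is only an illustration of the principle with a simple Gaussian filter; the paper's actual method operates on 3D spatio-temporal vector fields in curvelet spaces, and all function names below are our own.

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def cartoon_texture(img: np.ndarray, sigma: float = 2.0):
    """Split an image into a smooth 'cartoon' part (low-pass) and an
    oscillatory 'texture' part (residual), so cartoon + texture == img."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    padded = np.pad(img, radius, mode="edge")
    # Separable Gaussian blur: convolve rows, then columns.
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    cartoon = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    texture = img - cartoon
    return cartoon, texture
```

In the paper's setting the same split is applied to the spatio-temporal motion field rather than to intensities: turbulence-induced motion is oscillatory (texture-like), while genuine object motion is the non-oscillatory (cartoon-like) component.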
Related papers
- Investigation of moving objects through atmospheric turbulence from a non-stationary platform [0.5735035463793008]
In this work, we extract the optical flow field corresponding to moving objects from an image sequence captured from a moving camera.
Our procedure first computes the optical flow field and creates a motion model to compensate for the flow field induced by camera motion.
All of the sequences and code used in this work are open source and are available by contacting the authors.
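The compensation step described above can be sketched as follows: fit a global parametric motion model to the dense optical flow by least squares and subtract it, so that large residuals flag independently moving objects. This is a simplified affine stand-in, not the authors' actual motion model; the function name and model form are assumptions for illustration.

```python
import numpy as np

def compensate_camera_motion(flow: np.ndarray) -> np.ndarray:
    """Fit a global affine model u = a + b*x + c*y (per flow component)
    to a dense flow field of shape (H, W, 2) by least squares, then
    subtract it. Large residuals indicate motion inconsistent with the
    camera-induced flow, i.e. candidate moving objects."""
    h, w, _ = flow.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Design matrix: constant, x, and y terms for the affine model.
    A = np.stack([np.ones(h * w), xs.ravel(), ys.ravel()], axis=1)
    residual = np.empty_like(flow)
    for ch in range(2):
        coeffs, *_ = np.linalg.lstsq(A, flow[..., ch].ravel(), rcond=None)
        residual[..., ch] = flow[..., ch] - (A @ coeffs).reshape(h, w)
    return residual
```

A pure camera pan or zoom produces an affine-like flow that the fit absorbs almost entirely, while a small object moving on its own leaves a localized residual.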
arXiv Detail & Related papers (2024-10-29T00:54:28Z) - JSTR: Joint Spatio-Temporal Reasoning for Event-based Moving Object Detection [17.3397709143323]
Event-based moving object detection is a challenging task, where static background and moving object are mixed together.
We propose a novel joint spatio-temporal reasoning method for event-based moving object detection.
arXiv Detail & Related papers (2024-03-12T09:22:52Z) - Invariant-based Mapping of Space During General Motion of an Observer [0.0]
This paper explores visual motion-based invariants, resulting in a new instantaneous domain.
We make use of nonlinear functions derived from measurable optical flow, which are linked to geometric 3D invariants.
We present simulations involving a camera that translates and rotates relative to a 3D object, capturing snapshots of the camera projected images.
arXiv Detail & Related papers (2023-11-18T17:40:35Z) - Detecting Moving Objects Using a Novel Optical-Flow-Based Range-Independent Invariant [0.0]
We present an optical-flow-based transformation that yields a consistent 2D invariant image output regardless of time instants, range of points in 3D, and the speed of the camera.
In the new domain, projections of 3D points that deviate from the values of the predefined lookup image can be clearly identified as moving relative to the stationary 3D environment.
arXiv Detail & Related papers (2023-10-14T17:42:19Z) - 3D Motion Magnification: Visualizing Subtle Motions with Time Varying Radiance Fields [58.6780687018956]
We present a 3D motion magnification method that can magnify subtle motions from scenes captured by a moving camera.
We represent the scene with time-varying radiance fields and leverage the Eulerian principle for motion magnification.
We evaluate the effectiveness of our method on both synthetic and real-world scenes captured under various camera setups.
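The Eulerian principle mentioned above amplifies subtle motion by temporally band-pass filtering each pixel's intensity signal and adding the amplified band back. A minimal 1-pixel-wise sketch with an ideal FFT band-pass (not the cited paper's radiance-field method; parameter names are our own):

```python
import numpy as np

def magnify_motion(frames: np.ndarray, alpha: float = 10.0,
                   lo: float = 0.5, hi: float = 3.0,
                   fps: float = 30.0) -> np.ndarray:
    """Eulerian-style magnification: band-pass each pixel's temporal
    signal (frames has shape (T, H, W)) in [lo, hi] Hz with an ideal
    FFT filter, amplify it by alpha, and add it back to the input."""
    spectrum = np.fft.rfft(frames, axis=0)
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    spectrum[~band] = 0.0  # keep only the frequency band of interest
    bandpassed = np.fft.irfft(spectrum, n=frames.shape[0], axis=0)
    return frames + alpha * bandpassed
```

A barely visible 1 Hz intensity flicker of amplitude 0.1 becomes an obvious oscillation of amplitude roughly (1 + alpha) x 0.1 after magnification.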
arXiv Detail & Related papers (2023-08-07T17:59:59Z) - Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
arXiv Detail & Related papers (2022-06-29T18:47:05Z) - Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z) - Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z) - FMODetect: Robust Detection and Trajectory Estimation of Fast Moving Objects [110.29738581961955]
We propose the first learning-based approach for detection and trajectory estimation of fast moving objects.
The proposed method first detects all fast moving objects as a truncated distance function to the trajectory.
For the sharp appearance estimation, we propose an energy minimization based deblurring.
arXiv Detail & Related papers (2020-12-15T11:05:34Z) - Neural Topological SLAM for Visual Navigation [112.73876869904]
We design topological representations for space that leverage semantics and afford approximate geometric reasoning.
We describe supervised learning-based algorithms that can build, maintain and use such representations under noisy actuation.
arXiv Detail & Related papers (2020-05-25T17:56:29Z) - Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds [22.850519892606716]
We have developed a motion detector based on the shallow visual neural pathway of Drosophila.
This detector is sensitive to the movement of objects and can well suppress background noise.
An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generate the 3D bounding boxes and object categories.
arXiv Detail & Related papers (2020-05-06T10:04:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.