3D Trajectory Reconstruction of Moving Points Based on a Monocular Camera
- URL: http://arxiv.org/abs/2502.19689v1
- Date: Thu, 27 Feb 2025 02:01:40 GMT
- Title: 3D Trajectory Reconstruction of Moving Points Based on a Monocular Camera
- Authors: Huayu Huang, Banglei Guan, Yang Shang, Qifeng Yu
- Abstract summary: Reconstructing a point's 3D motion from images captured by a monocular camera alone is infeasible without prior assumptions. This paper presents an algorithm for reconstructing 3D trajectories of moving points using a monocular camera.
- Score: 6.9017898687323775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The motion measurement of point targets constitutes a fundamental problem in photogrammetry, with extensive applications across various engineering domains. Reconstructing a point's 3D motion from images captured by a monocular camera alone is infeasible without prior assumptions. Under limited observation conditions, such as insufficient observations, long observation distances, and large observation errors of the platform, least squares estimation becomes ill-conditioned. This paper presents an algorithm for reconstructing the 3D trajectories of moving points using a monocular camera. The motion of the points is represented by temporal polynomials. Ridge estimation is introduced to mitigate the ill-conditioning caused by limited observation conditions. An automatic algorithm for determining the order of the temporal polynomials is then proposed. Furthermore, a definition of reconstructability for temporal polynomials is proposed to quantitatively characterize the reconstruction accuracy. Simulated and real-world experimental results demonstrate the feasibility, accuracy, and efficiency of the proposed method.
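The pipeline described in the abstract, a temporal-polynomial trajectory model whose coefficients are estimated by ridge regression from monocular line-of-sight observations, can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' implementation: it assumes known camera centers per frame and unit bearing vectors expressed in the world frame, and the names (`time_basis`, `ridge_lambda`, etc.) are illustrative. The automatic order selection and the reconstructability measure proposed in the paper are omitted.

```python
import numpy as np

def time_basis(t, order):
    """Monomial basis [1, t, t^2, ..., t^order] evaluated at time t."""
    return np.array([t ** j for j in range(order + 1)])

def skew(v):
    """Cross-product matrix [v]_x so that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def build_system(times, bearings, cam_centers, order):
    """Stack the collinearity constraints [d_k]_x (X(t_k) - C_k) = 0.

    X(t) = sum_j a_j t^j with a_j in R^3; the unknown vector theta stacks
    [a_0, a_1, ..., a_order], so it has 3 * (order + 1) entries.
    """
    rows, rhs = [], []
    for t, d, C in zip(times, bearings, cam_centers):
        D = skew(np.asarray(d, dtype=float))
        phi = time_basis(t, order)
        rows.append(np.kron(phi, D))                  # 3 x 3*(order+1) block
        rhs.append(D @ np.asarray(C, dtype=float))    # [d_k]_x C_k
    return np.vstack(rows), np.concatenate(rhs)

def ridge_fit(times, bearings, cam_centers, order=2, ridge_lambda=1e-3):
    """Ridge estimate theta = (A^T A + lambda I)^{-1} A^T b of the coefficients."""
    A, b = build_system(times, bearings, cam_centers, order)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + ridge_lambda * np.eye(n), A.T @ b)

def evaluate(theta, t, order):
    """Evaluate the reconstructed 3D position at time t."""
    coeffs = theta.reshape(order + 1, 3)  # row j holds a_j
    return time_basis(t, order) @ coeffs
```

With enough well-conditioned observations, setting `ridge_lambda` to zero reduces this to ordinary least squares; the ridge term matters precisely in the limited-observation regimes the paper targets, where the normal matrix is near singular.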
Related papers
- HORT: Monocular Hand-held Objects Reconstruction with Transformers [61.36376511119355]
Reconstructing hand-held objects in 3D from monocular images is a significant challenge in computer vision.
We propose a transformer-based model to efficiently reconstruct dense 3D point clouds of hand-held objects.
Our method achieves state-of-the-art accuracy with much faster inference speed, while generalizing well to in-the-wild images.
arXiv Detail & Related papers (2025-03-27T09:45:09Z) - EMoTive: Event-guided Trajectory Modeling for 3D Motion Estimation [59.33052312107478]
Event cameras offer possibilities for 3D motion estimation through continuous adaptive pixel-level responses to scene changes.
This paper presents EMoTive, a novel event-based framework that models non-uniform trajectories via event-guided parametric curves.
For motion representation, we introduce a density-aware adaptation mechanism to fuse spatial and temporal features under event guidance.
The final 3D motion estimation is achieved through multi-temporal sampling of parametric trajectories, flows and depth motion fields.
arXiv Detail & Related papers (2025-03-14T13:15:54Z) - UNOPose: Unseen Object Pose Estimation with an Unposed RGB-D Reference Image [86.7128543480229]
We present a novel approach and benchmark, termed UNOPose, for unseen one-reference-based object pose estimation.
Building upon a coarse-to-fine paradigm, UNOPose constructs an SE(3)-invariant reference frame to standardize object representation.
We recalibrate the weight of each correspondence based on its predicted likelihood of being within the overlapping region.
arXiv Detail & Related papers (2024-11-25T05:36:00Z) - iComMa: Inverting 3D Gaussian Splatting for Camera Pose Estimation via Comparing and Matching [14.737266480464156]
We present a method named iComMa to address the 6D camera pose estimation problem in computer vision.
We propose an efficient method for accurate camera pose estimation by inverting 3D Gaussian Splatting (3DGS).
arXiv Detail & Related papers (2023-12-14T15:31:33Z) - MELON: NeRF with Unposed Images in SO(3) [35.093700416540436]
Using a neural network to regularize pose estimation, we show that our method can reconstruct a neural radiance field from unposed images with state-of-the-art accuracy while requiring ten times fewer views than adversarial approaches.
arXiv Detail & Related papers (2023-03-14T17:33:39Z) - RNNPose: Recurrent 6-DoF Object Pose Refinement with Robust Correspondence Field Estimation and Pose Optimization [46.144194562841435]
We propose a framework based on a recurrent neural network (RNN) for object pose refinement.
The problem is formulated as a non-linear least squares problem based on the estimated correspondence field.
The correspondence field estimation and pose refinement are conducted alternately in each iteration to recover accurate object poses.
arXiv Detail & Related papers (2022-03-24T06:24:55Z) - Neural Computed Tomography [1.7188280334580197]
Motion during acquisition of a set of projections can lead to significant motion artifacts in computed tomography reconstructions.
We propose a novel reconstruction framework, NeuralCT, to generate time-resolved images free from motion artifacts.
arXiv Detail & Related papers (2022-01-17T18:50:58Z) - Estimating Egocentric 3D Human Pose in Global Space [70.7272154474722]
We present a new method for egocentric global 3D body pose estimation using a single head-mounted fisheye camera.
Our approach outperforms state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-04-27T20:01:57Z) - Geometry-based Distance Decomposition for Monocular 3D Object Detection [48.63934632884799]
We propose a novel geometry-based distance decomposition to recover the distance by its factors.
The decomposition factors the distance of objects into the most representative and stable variables.
Our method directly predicts 3D bounding boxes from RGB images with a compact architecture.
arXiv Detail & Related papers (2021-04-08T13:57:30Z) - MonoRUn: Monocular 3D Object Detection by Reconstruction and Uncertainty Propagation [4.202461384355329]
We propose MonoRUn, a novel 3D object detection framework that learns dense correspondences and geometry in a self-supervised manner.
Our proposed approach outperforms current state-of-the-art methods on KITTI benchmark.
arXiv Detail & Related papers (2021-03-23T15:03:08Z) - Calibrated and Partially Calibrated Semi-Generalized Homographies [65.29477277713205]
We propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera.
The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments.
arXiv Detail & Related papers (2021-03-11T08:56:24Z) - Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.