Odometry Without Correspondence from Inertially Constrained Ruled Surfaces
- URL: http://arxiv.org/abs/2512.00327v1
- Date: Sat, 29 Nov 2025 05:36:50 GMT
- Title: Odometry Without Correspondence from Inertially Constrained Ruled Surfaces
- Authors: Chenqi Zhu, Levi Burner, Yiannis Aloimonos
- Abstract summary: Inspired by event cameras' propensity for edge detection, this research presents a novel algorithm to reconstruct 3D scenes and estimate visual odometry from ruled surfaces.
- Score: 14.767550805977999
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual odometry techniques typically rely on feature extraction from a sequence of images and subsequent computation of optical flow. This point-to-point correspondence between two consecutive frames can be costly to compute and suffers from varying accuracy, which affects the quality of the odometry estimate. Attempts have been made to bypass the difficulties originating from the correspondence problem by adopting line features and fusing other sensors (event camera, IMU) to improve performance, yet many of these approaches still rely heavily on correspondence. If the camera observes a straight line as it moves, the image of the line sweeps a smooth surface in image-space-time. This is a ruled surface, and analyzing its shape gives information about odometry. Further, its estimation requires only differentially computed updates from point-to-line associations. Inspired by event cameras' propensity for edge detection, this research presents a novel algorithm to reconstruct 3D scenes and estimate visual odometry from these ruled surfaces. By constraining the surfaces with the inertial measurements from an onboard IMU sensor, the dimensionality of the solution space is greatly reduced.
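The geometric fact the abstract relies on can be illustrated with a small sketch. The following is not the paper's implementation; it is a minimal NumPy simulation, with assumed numbers throughout, showing that as a camera translates, the projections of a fixed 3D line stack into a ruled surface in (x, y, t) space, where each constant-time slice is itself a straight image line (a "ruling").

```python
import numpy as np

def project(points_3d, cam_pos):
    """Pinhole projection with unit focal length for a camera at cam_pos."""
    rel = points_3d - cam_pos
    return rel[:, :2] / rel[:, 2:3]

# A fixed 3D line: base point plus direction, sampled along parameter s.
# All values here are illustrative assumptions, not from the paper.
base = np.array([0.0, 1.0, 5.0])
direction = np.array([1.0, 0.5, 0.2])
s = np.linspace(-1.0, 1.0, 50)
line_pts = base + s[:, None] * direction

cam_velocity = np.array([0.1, 0.0, 0.05])  # assumed constant translation

# Stack the projected line over time: rows are (x, y, t) surface samples.
samples = []
for t in np.linspace(0.0, 1.0, 20):
    uv = project(line_pts, cam_velocity * t)
    samples.append(np.column_stack([uv, np.full(len(uv), t)]))
surface = np.vstack(samples)

# Each constant-t slice is collinear in the image: after centering, the
# 2D points span a rank-1 subspace, which is what makes the surface ruled.
slice_t0 = surface[surface[:, 2] == 0.0][:, :2]
ruling_rank = np.linalg.matrix_rank(slice_t0 - slice_t0.mean(axis=0), tol=1e-8)
```

Analyzing how these rulings evolve over time (their sweep) is what, per the abstract, carries the odometry information; the IMU constraint then restricts which sweeps are admissible.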
Related papers
- Geometry OR Tracker: Universal Geometric Operating Room Tracking [61.399734016038614]
In operating rooms (OR), world-scale multi-view 3D tracking supports downstream applications such as surgeon behavior recognition.
Camera calibration and RGB-D registration are always unreliable, leading to cross-view geometric inconsistency.
We introduce Geometry OR Tracker, a two-stage pipeline that rectifies imprecise calibration into a scale-consistent and geometrically consistent camera setup.
arXiv Detail & Related papers (2026-02-28T09:21:21Z) - VA-GS: Enhancing the Geometric Representation of Gaussian Splatting via View Alignment [48.147381011235446]
3D Gaussian Splatting has recently emerged as an efficient solution for real-time novel view synthesis.
We propose a novel method that enhances the geometric representation of 3D Gaussians through view alignment.
Our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis.
arXiv Detail & Related papers (2025-10-13T14:44:50Z) - Visual Odometry with Transformers [68.453547770334]
We introduce Visual odometry Transformer (VoT), which processes sequences of monocular frames by extracting features.
Unlike prior methods, VoT directly predicts camera motion without estimating dense geometry and relies solely on camera poses for supervision.
VoT scales effectively with larger datasets, benefits substantially from stronger pre-trained backbones, generalizes across diverse camera motions and calibration settings, and outperforms traditional methods while running more than 3 times faster.
arXiv Detail & Related papers (2025-10-02T17:00:14Z) - Pseudo Depth Meets Gaussian: A Feed-forward RGB SLAM Baseline [64.42938561167402]
We propose an online 3D reconstruction method using 3D Gaussian-based SLAM, combined with a feed-forward recurrent prediction module.
This approach replaces slow test-time optimization with fast network inference, significantly improving tracking speed.
Our method achieves performance on par with the state-of-the-art SplaTAM, while reducing tracking time by more than 90%.
arXiv Detail & Related papers (2025-08-06T16:16:58Z) - Using a Distance Sensor to Detect Deviations in a Planar Surface [20.15053198469424]
We investigate methods for determining if a planar surface contains geometric deviations using only an instantaneous measurement from a miniature optical time-of-flight sensor.
Key to our method is to utilize the entirety of information encoded in raw time-of-flight data captured by off-the-shelf distance sensors.
We build an example application in which our method enables mobile robot obstacle avoidance over a wide field-of-view.
arXiv Detail & Related papers (2024-08-07T15:24:25Z) - Inertial Guided Uncertainty Estimation of Feature Correspondence in Visual-Inertial Odometry/SLAM [8.136426395547893]
We propose a method to estimate the uncertainty of feature correspondence using an inertial guidance.
We also demonstrate the feasibility of our approach by incorporating it into a recent visual-inertial odometry/SLAM algorithm.
arXiv Detail & Related papers (2023-11-07T04:56:29Z) - Real-Time Simultaneous Localization and Mapping with LiDAR intensity [9.374695605941627]
We propose a novel real-time LiDAR intensity image-based simultaneous localization and mapping method.
Our method can run in real time with high accuracy and works well with illumination changes, low-texture, and unstructured environments.
arXiv Detail & Related papers (2023-01-23T03:59:48Z) - Differentiable Uncalibrated Imaging [25.67247660827913]
We propose a differentiable imaging framework to address uncertainty in measurement coordinates such as sensor locations and projection angles.
We apply implicit neural networks, also known as neural fields, which are naturally differentiable with respect to the input coordinates.
Differentiability is key as it allows us to jointly fit a measurement representation, optimize over the uncertain measurement coordinates, and perform image reconstruction which in turn ensures consistent calibration.
arXiv Detail & Related papers (2022-11-18T22:48:09Z) - PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis [52.546998369121354]
We present PVSeRF, a learning framework that reconstructs neural radiance fields from single-view RGB images.
We propose to incorporate explicit geometry reasoning and combine it with pixel-aligned features for radiance field prediction.
We show that the introduction of such geometry-aware features helps to achieve a better disentanglement between appearance and geometry.
arXiv Detail & Related papers (2022-02-10T07:39:47Z) - Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes inference more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z) - Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.