Estimating Scene Flow in Robot Surroundings with Distributed Miniaturized Time-of-Flight Sensors
- URL: http://arxiv.org/abs/2504.02439v1
- Date: Thu, 03 Apr 2025 09:57:51 GMT
- Title: Estimating Scene Flow in Robot Surroundings with Distributed Miniaturized Time-of-Flight Sensors
- Authors: Jack Sander, Giammarco Caroleo, Alessandro Albini, Perla Maiolino
- Abstract summary: We present an approach for scene flow estimation from low-density and noisy point clouds acquired from Time of Flight (ToF) sensors distributed on the robot body. The proposed method clusters points from consecutive frames and applies Iterative Closest Point (ICP) to estimate a dense motion flow. We employ a fitness-based classification to distinguish between stationary and moving points and an inlier removal strategy to refine geometric correspondences.
- Score: 41.45395153490076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking motions of humans or objects in the surroundings of the robot is essential to improve safe robot motions and reactions. In this work, we present an approach for scene flow estimation from low-density and noisy point clouds acquired from miniaturized Time of Flight (ToF) sensors distributed on the robot body. The proposed method clusters points from consecutive frames and applies Iterative Closest Point (ICP) to estimate a dense motion flow, with additional steps introduced to mitigate the impact of sensor noise and low-density data points. Specifically, we employ a fitness-based classification to distinguish between stationary and moving points and an inlier removal strategy to refine geometric correspondences. The proposed approach is validated in an experimental setup where 24 ToF sensors are used to estimate the velocity of an object moving at different controlled speeds. Experimental results show that the method consistently approximates the direction of the motion and its magnitude with an error in line with sensor noise.
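The pipeline described in the abstract — cluster points from two consecutive frames, register each cluster with ICP, then classify it as moving or stationary by a fitness threshold — can be sketched in miniature. The sketch below is a hedged illustration, not the authors' implementation: it uses a translation-only ICP with brute-force nearest neighbours, and the threshold values are assumptions.

```python
# Minimal sketch of the abstract's pipeline: translation-only ICP between
# a cluster in frame A and frame B, plus a fitness-based moving/stationary
# decision. All function names and thresholds are illustrative assumptions.
import math

def nearest(p, cloud):
    """Return the point in `cloud` closest to `p` (brute force)."""
    return min(cloud, key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))

def icp_translation(src, dst, iters=10):
    """Translation-only ICP: repeatedly match nearest neighbours and shift
    `src` by the mean residual. Returns the translation and a fitness score
    (mean residual distance; lower = better alignment)."""
    t = [0.0, 0.0, 0.0]
    moved = [list(p) for p in src]
    for _ in range(iters):
        pairs = [(p, nearest(p, dst)) for p in moved]
        delta = [sum(q[k] - p[k] for p, q in pairs) / len(pairs) for k in range(3)]
        for p in moved:
            for k in range(3):
                p[k] += delta[k]
        for k in range(3):
            t[k] += delta[k]
    fitness = sum(math.dist(p, nearest(p, dst)) for p in moved) / len(moved)
    return t, fitness

# Two consecutive "frames": an object cluster translated by 0.1 m along x.
frame_a = [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0), (0.0, 0.2, 0.0)]
frame_b = [(x + 0.1, y, z) for x, y, z in frame_a]
t, fit = icp_translation(frame_a, frame_b)
moving = math.dist((0.0, 0.0, 0.0), t) > 0.05  # motion threshold (assumed)
```

Dividing the recovered per-cluster translation by the inter-frame interval would give the velocity estimate the experiments evaluate; the inlier-removal step from the abstract would discard correspondence pairs with large residuals before the final update.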
Related papers
- Event-Based Tracking Any Point with Motion-Augmented Temporal Consistency [58.719310295870024]
This paper presents an event-based framework for tracking any point. It tackles the challenges posed by spatial sparsity and motion sensitivity in events. It achieves 150% faster processing with competitive model parameters.
arXiv Detail & Related papers (2024-12-02T09:13:29Z) - Solution for Point Tracking Task of ICCV 1st Perception Test Challenge 2023 [50.910598799408326]
The Tracking Any Point (TAP) task tracks any physical surface through a video.
Several existing approaches have explored the TAP by considering the temporal relationships to obtain smooth point motion trajectories.
We propose a simple yet effective approach called TAP with confident static points (TAPIR+), which focuses on rectifying the tracking of the static point in the videos shot by a static camera.
arXiv Detail & Related papers (2024-03-26T13:50:39Z) - Neural Implicit Swept Volume Models for Fast Collision Detection [0.0]
We present an algorithm combining the speed of deep-learning-based signed distance computations with the strong accuracy guarantees of geometric collision checkers.
We validate our approach in simulated and real-world robotic experiments, and demonstrate that it is able to speed up a commercial bin picking application.
arXiv Detail & Related papers (2024-02-23T12:06:48Z) - RMS: Redundancy-Minimizing Point Cloud Sampling for Real-Time Pose Estimation [13.163076804805732]
We propose a novel point cloud sampling method named RMS that minimizes redundancy within a 3D point cloud.
We integrate RMS into the point-based KISS-ICP and feature-based LOAM odometry pipelines.
Experiments demonstrate that RMS outperforms state-of-the-art methods in speed, compression, and accuracy in well-conditioned and geometrically-degenerated settings.
arXiv Detail & Related papers (2023-12-12T14:55:49Z) - DICP: Doppler Iterative Closest Point Algorithm [5.934931737701265]
We present a novel algorithm for point cloud registration for range sensors capable of measuring per-return instantaneous radial velocity: Doppler ICP.
We propose a new Doppler velocity objective function that exploits the compatibility of each point's Doppler measurement and the sensor's current motion estimate.
Our results show a significant performance improvement in terms of the registration accuracy with the added benefit of faster convergence guided by the Doppler velocity gradients.
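The Doppler-compatibility idea behind DICP — penalize correspondences whose measured radial velocity disagrees with what the current sensor-motion estimate predicts — can be sketched as a single residual term. The function below is an assumed, simplified form (static world, translation-only sensor velocity); the sign convention and names are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of a Doppler-compatibility residual in the spirit of DICP:
# compare a point's measured radial velocity with the radial velocity
# predicted from the sensor's current velocity estimate. Assumes a static
# world and a negative-when-approaching Doppler sign convention.
import math

def doppler_residual(point, measured_vr, sensor_velocity):
    """Residual between the measured radial velocity of `point` and the
    component of the sensor's velocity along the line of sight (negated,
    since a sensor moving toward a static point sees it approaching)."""
    norm = math.sqrt(sum(c * c for c in point))
    direction = [c / norm for c in point]  # unit line-of-sight vector
    predicted_vr = -sum(d * v for d, v in zip(direction, sensor_velocity))
    return measured_vr - predicted_vr

# Sensor moving at 2 m/s along +x; a static point straight ahead should be
# measured closing at -2 m/s, giving a near-zero residual.
r = doppler_residual((10.0, 0.0, 0.0), -2.0, (2.0, 0.0, 0.0))
```

In a full objective, squared residuals like this are summed over all returns and added to the geometric ICP cost, so points whose Doppler measurement contradicts the motion estimate contribute a large penalty and can be downweighted or rejected.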
arXiv Detail & Related papers (2022-01-28T05:51:07Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - FMODetect: Robust Detection and Trajectory Estimation of Fast Moving Objects [110.29738581961955]
We propose the first learning-based approach for detection and trajectory estimation of fast moving objects.
The proposed method first detects all fast moving objects as a truncated distance function to the trajectory.
For the sharp appearance estimation, we propose an energy minimization based deblurring.
arXiv Detail & Related papers (2020-12-15T11:05:34Z) - Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds [22.850519892606716]
We have developed a motion detector based on the shallow visual neural pathway of Drosophila.
This detector is sensitive to the movement of objects and can well suppress background noise.
An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generate the 3D bounding boxes and object categories.
arXiv Detail & Related papers (2020-05-06T10:04:23Z) - Any Motion Detector: Learning Class-agnostic Scene Dynamics from a Sequence of LiDAR Point Clouds [4.640835690336654]
We propose a novel real-time approach of temporal context aggregation for motion detection and motion parameters estimation.
We introduce an ego-motion compensation layer to achieve real-time inference with performance comparable to a naive odometric transform of the original point cloud sequence.
arXiv Detail & Related papers (2020-04-24T10:40:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.