Dense Pixel-wise Micro-motion Estimation of Object Surface by using Low
Dimensional Embedding of Laser Speckle Pattern
- URL: http://arxiv.org/abs/2011.00174v1
- Date: Sat, 31 Oct 2020 03:03:00 GMT
- Title: Dense Pixel-wise Micro-motion Estimation of Object Surface by using Low
Dimensional Embedding of Laser Speckle Pattern
- Authors: Ryusuke Sagawa, Yusuke Higuchi, Hiroshi Kawasaki, Ryo Furukawa,
Takahiro Ito
- Abstract summary: This paper proposes a method of estimating the micro-motion of an object at each pixel that is too small to detect under a common setup of camera and illumination.
The approach is based on the speckle pattern, which is produced by the mutual interference of laser light on the object's surface and continuously changes its appearance according to the out-of-plane motion of the surface.
To compensate for both micro- and large motions, the method estimates motion parameters up to scale at each pixel by nonlinearly embedding the speckle pattern into a low-dimensional space.
- Score: 4.713575447740915
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a method of estimating the micro-motion of an
object at each pixel, motion that is too small to detect under a common setup
of camera and illumination. The method introduces an active-lighting approach
to make the motion visually detectable. The approach is based on the speckle
pattern, which is produced by the mutual interference of laser light on the
object's surface and continuously changes its appearance according to the
out-of-plane motion of the surface. The speckle pattern, however, becomes
uncorrelated under large motion. To compensate for both micro- and large
motions, the method estimates the motion parameters up to scale at each pixel
by nonlinearly embedding the speckle pattern into a low-dimensional space. The
out-of-plane motion is then calculated by making the motion parameters
spatially consistent across the image. In the experiments, the proposed method
is compared against other measuring devices to demonstrate its effectiveness.
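The abstract does not specify which nonlinear embedding the method uses, so the following Python sketch only illustrates the idea: for each pixel, the time series of local speckle patches is embedded into a one-dimensional space (here with scikit-learn's Isomap, an assumed stand-in, not necessarily the authors' choice), and a naive sign-propagation step stands in for the spatial-consistency optimization that resolves the per-pixel scale and sign ambiguity. The patch size and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only: the embedding algorithm (Isomap), the patch size,
# and the naive sign-propagation step are assumptions, not the paper's method.
import numpy as np
from sklearn.manifold import Isomap


def embed_pixel_motion(frames, y, x, half=3, n_neighbors=8):
    """Embed the temporal sequence of local speckle patches around pixel (y, x)
    into a 1-D space; the trajectory is a per-pixel motion parameter known only
    up to scale and sign.

    frames: (T, H, W) stack of grayscale speckle images.
    Returns: (T,) array of embedding coordinates.
    """
    patches = frames[:, y - half:y + half + 1, x - half:x + half + 1]
    samples = patches.reshape(patches.shape[0], -1)   # one patch vector per frame
    coords = Isomap(n_neighbors=n_neighbors, n_components=1).fit_transform(samples)
    return coords[:, 0]


def resolve_signs(per_pixel_coords):
    """Toy spatial-consistency step: flip each pixel's trajectory so that it
    correlates positively with its left neighbour. This is a simple stand-in
    for the optimization that makes motion parameters spatially consistent."""
    out = per_pixel_coords.copy()                     # shape (H, W, T)
    for i in range(out.shape[0]):
        for j in range(1, out.shape[1]):
            if np.dot(out[i, j], out[i, j - 1]) < 0:
                out[i, j] *= -1.0
    return out


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((40, 32, 32)).astype(np.float32)   # synthetic stand-in data
    grid = 8                                                # small pixel grid for the demo
    coords = np.stack([
        np.stack([embed_pixel_motion(frames, 8 + i, 8 + j) for j in range(grid)])
        for i in range(grid)
    ])                                                      # (grid, grid, T)
    print(resolve_signs(coords).shape)
```

In a real setup the embedding would be run densely over the speckle video and the remaining global scale would be fixed by the spatial-consistency step described in the abstract.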
Related papers
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
arXiv Detail & Related papers (2024-03-02T16:51:35Z)
- Motion-induced error reduction for high-speed dynamic digital fringe projection system [1.506359725738692]
In phase-shifting profilometry, any motion during the acquisition of fringe patterns can introduce errors.
We propose a method to reduce the errors pixel-wise when the measurement system is moving on a motorized linear stage.
arXiv Detail & Related papers (2024-01-29T07:57:43Z)
- Virtual Inverse Perspective Mapping for Simultaneous Pose and Motion Estimation [5.199765487172328]
We propose an automatic method for estimating the pose and motion of a monocular camera, mounted on a ground-moving robot, relative to the ground surface.
The framework adopts a semi-dense approach that benefits from both a feature-based method and an image-registration-based method.
arXiv Detail & Related papers (2023-03-09T11:45:00Z)
- Data-Driven Stochastic Motion Evaluation and Optimization with Image by Spatially-Aligned Temporal Encoding [8.104557130048407]
This paper proposes a probabilistic motion prediction method for long motions. The motion is predicted so that it accomplishes a task from the initial state observed in the given image.
Our method seamlessly integrates the image and motion data into the image feature domain by spatially-aligned temporal encoding.
The effectiveness of the proposed method is demonstrated in a variety of experiments against comparable state-of-the-art methods.
arXiv Detail & Related papers (2023-02-10T04:06:00Z)
- Retrieving space-dependent polarization transformations via near-optimal quantum process tomography [55.41644538483948]
We investigate the application of genetic and machine learning approaches to tomographic problems.
We find that the neural network-based scheme provides a significant speed-up that may be critical in applications requiring real-time characterization.
We expect these results to lay the groundwork for the optimization of tomographic approaches in more general quantum processes.
arXiv Detail & Related papers (2022-10-27T11:37:14Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- MBA-VO: Motion Blur Aware Visual Odometry [99.56896875807635]
Motion blur is one of the major challenges remaining for visual odometry methods.
In low-light conditions where longer exposure times are necessary, motion blur can appear even for relatively slow camera motions.
We present a novel hybrid visual odometry pipeline with a direct approach that explicitly models and estimates the camera's local trajectory within the exposure time.
arXiv Detail & Related papers (2021-03-25T09:02:56Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Estimating Nonplanar Flow from 2D Motion-blurred Widefield Microscopy Images via Deep Learning [7.6146285961466]
We present a method to predict, from a single textured wide-field microscopy image, the movement of out-of-plane particles using the local characteristics of the motion blur.
This method could enable microscopists to gain insights about the dynamic properties of samples without the need for high-speed cameras or high-intensity light exposure.
arXiv Detail & Related papers (2021-02-14T19:44:28Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.