Deformation Monitoring of Tunnel using Phase-based Motion Magnification
and Optical Flow
- URL: http://arxiv.org/abs/2310.07076v1
- Date: Thu, 15 Jun 2023 19:39:47 GMT
- Title: Deformation Monitoring of Tunnel using Phase-based Motion Magnification
and Optical Flow
- Authors: Kecheng Chen, Hiroshi Kogi and Kenichi Soga
- Abstract summary: This study combines phase-based motion magnification (PMM) and optical flow (OF) to quantify the pixel displacements of the magnified deformation mode in an underground tunnel scene.
With GPU acceleration, a dense OF algorithm that computes each pixel's displacement is adopted to derive the motion of the whole scene.
- Score: 0.36832029288386137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: During construction, continuous monitoring of underground tunnels can
mitigate potential hazards and facilitate an in-depth understanding of the
ground-tunnel interaction behavior. Traditional vision-based monitoring can
directly capture an extensive range of motion but cannot separate the tunnel's
vibration and deformation modes. Phase-based motion magnification (PMM) is a
technique that magnifies motion in target frequency bands and helps identify
system dynamics. Optical flow (OF) is a popular computer-vision method for
calculating the motion of image intensities and has a much lower computational
cost than Digital Image Correlation. This study combines PMM and OF to quantify
the pixel displacements of the magnified deformation mode in an underground
tunnel scene. Because motion magnification artifacts may lead to inaccurate
quantification, a 2D Wiener filter is used to smooth the high-frequency
content. With GPU acceleration, a dense OF algorithm that computes each pixel's
displacement is adopted to derive the motion of the whole scene. A validation
experiment compares the amplified motion with the actual motion of prisms
preinstalled in the tunnel.
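
For illustration, the sketch below shows one way the quantification stage described in the abstract could be assembled in Python: a simplified intensity-based temporal band-pass magnification (a stand-in for the paper's phase-based method), 2D Wiener filtering to suppress magnification artifacts, and dense Farneback optical flow for per-pixel displacements. The function names, parameters, and the use of OpenCV/SciPy are assumptions for illustration only; the paper's GPU-accelerated dense OF implementation and its phase-based pyramid are not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of the pipeline outlined in
# the abstract: (1) band-pass magnify motion in a target frequency band -- a
# simplified intensity-based (Eulerian) stand-in for phase-based magnification,
# (2) smooth magnification artifacts with a 2D Wiener filter, and
# (3) quantify per-pixel displacements with dense Farneback optical flow.
# Requires: numpy, scipy, opencv-python. GPU acceleration is omitted; this
# CPU version only illustrates the data flow.
import cv2
import numpy as np
from scipy.signal import butter, sosfiltfilt, wiener


def magnify_band(frames, fs, f_lo, f_hi, alpha=10.0):
    """Amplify temporal variations in [f_lo, f_hi] Hz (simplified, intensity-based).

    frames: float array of shape (T, H, W); fs: frame rate in Hz.
    """
    sos = butter(2, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    band = sosfiltfilt(sos, frames, axis=0)  # per-pixel temporal band-pass
    return frames + alpha * band             # add the amplified band back


def pixel_displacements(magnified, wiener_size=(5, 5)):
    """Wiener-smooth each magnified frame, then compute dense optical flow.

    Returns a list of (H, W, 2) arrays: per-pixel (dx, dy) between consecutive frames.
    """
    smoothed = []
    for frame in magnified:
        f = wiener(frame.astype(np.float64), mysize=wiener_size)  # suppress artifacts
        f = cv2.normalize(f, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        smoothed.append(f)

    flows = []
    for prev, curr in zip(smoothed[:-1], smoothed[1:]):
        # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flows.append(flow)  # shape (H, W, 2)
    return flows


if __name__ == "__main__":
    # Synthetic demo: 60 frames of a blob oscillating at 2 Hz, captured at 30 fps.
    fs, T, H, W = 30.0, 60, 64, 64
    t = np.arange(T) / fs
    yy, xx = np.mgrid[0:H, 0:W]
    frames = np.stack([np.exp(-((xx - 32 - 0.3 * np.sin(2 * np.pi * 2.0 * tk)) ** 2
                                + (yy - 32) ** 2) / 50.0) for tk in t]).astype(np.float32)
    magnified = magnify_band(frames, fs, f_lo=1.5, f_hi=2.5, alpha=20.0)
    flows = pixel_displacements(magnified)
    print("mean |flow| of first pair:", float(np.abs(flows[0]).mean()))
```

In a faithful reproduction, the intensity band-pass would be replaced by phase manipulation on a complex steerable pyramid, and the Farneback step by a GPU-accelerated dense optical flow, as the abstract indicates.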
Related papers
- Highly efficient non-rigid registration in k-space with application to cardiac Magnetic Resonance Imaging [10.618048010632728]
We propose a novel self-supervised deep learning-based framework, dubbed the Local-All Pass Attention Network (LAPANet) for non-rigid motion estimation.
LAPANet was evaluated on cardiac motion estimation across various sampling trajectories and acceleration rates.
The achieved high temporal resolution (less than 5 ms) for non-rigid motion opens new avenues for motion detection, tracking and correction in dynamic and real-time MRI applications.
arXiv Detail & Related papers (2024-10-24T15:19:59Z)
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z)
- Generative Image Dynamics [80.70729090482575]
We present an approach to modeling an image-space prior on scene motion.
Our prior is learned from a collection of motion trajectories extracted from real video sequences.
arXiv Detail & Related papers (2023-09-14T17:54:01Z)
- Retrieving space-dependent polarization transformations via near-optimal quantum process tomography [55.41644538483948]
We investigate the application of genetic and machine learning approaches to tomographic problems.
We find that the neural network-based scheme provides a significant speed-up, that may be critical in applications requiring a characterization in real-time.
We expect these results to lay the groundwork for the optimization of tomographic approaches in more general quantum processes.
arXiv Detail & Related papers (2022-10-27T11:37:14Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Robust Dual-Graph Regularized Moving Object Detection [11.487964611698933]
Moving object detection and its associated background-foreground separation have been widely used in many applications.
We propose a robust dual-graph regularized moving object detection model based on the weighted nuclear norm regularization.
arXiv Detail & Related papers (2022-04-25T19:40:01Z)
- Which K-Space Sampling Schemes is good for Motion Artifact Detection in Magnetic Resonance Imaging? [0.0]
Motion artifacts are a common occurrence in the Magnetic Resonance Imaging (MRI) exam.
In this study, we investigate the effect of three conventional k-space samplers, including Cartesian, Uniform Spiral, and Radial, on motion-induced image distortion.
arXiv Detail & Related papers (2021-03-15T16:38:40Z)
- Estimating Nonplanar Flow from 2D Motion-blurred Widefield Microscopy Images via Deep Learning [7.6146285961466]
We present a method to predict, from a single textured wide-field microscopy image, the movement of out-of-plane particles using the local characteristics of the motion blur.
This method could enable microscopists to gain insights about the dynamic properties of samples without the need for high-speed cameras or high-intensity light exposure.
arXiv Detail & Related papers (2021-02-14T19:44:28Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in data acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Terahertz Pulse Shaping Using Diffractive Surfaces [6.895625925414448]
We present a diffractive network, which is used to shape an arbitrary broadband pulse into a desired optical waveform.
Results constitute the first demonstration of direct pulse shaping in terahertz spectrum.
This learning-based diffractive pulse engineering framework can find broad applications in e.g., communications, ultra-fast imaging and spectroscopy.
arXiv Detail & Related papers (2020-06-30T08:27:36Z)