Geometric Moment Invariants to Motion Blur
- URL: http://arxiv.org/abs/2101.08647v2
- Date: Mon, 25 Jan 2021 02:35:03 GMT
- Title: Geometric Moment Invariants to Motion Blur
- Authors: Hongxiang Hao, Hanlin Mo, Hua Li
- Abstract summary: We focus on removing the interference of motion blur by deriving motion blur invariants.
Based on geometric moments and a mathematical model of motion blur, we prove that the geometric moments of a blurred image and of the original image are linearly related.
Surprisingly, we find that some geometric moment invariants are invariant not only to spatial transforms but also to motion blur.
- Score: 4.8915390363596005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on removing the interference of motion blur by
deriving motion blur invariants. Unlike earlier work, we do not restore any
blurred image. Based on geometric moments and a mathematical model of motion blur,
we prove that the geometric moments of a blurred image and of the original image
are linearly related. Using this property, we can analyse whether an
existing moment-based feature is invariant to motion blur. Surprisingly, we
find that some geometric moment invariants are invariant not only to spatial
transforms but also to motion blur. We also test the invariance and robustness of
these invariants on synthetic and real blurred image datasets. The results
show that these invariants outperform some widely used blur moment invariants and
non-moment image features in image retrieval, classification and template
matching.
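
The linear relation claimed above can be checked numerically. The following sketch is a minimal illustration under stated assumptions, not the authors' code: it uses only NumPy, models horizontal motion blur as a normalized average of shifted copies of the image, and verifies that the zeroth-order raw moment is preserved while the first-order x-moment shifts by the total intensity times the blur kernel's centroid.

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw geometric moment m_pq = sum over pixels of x^p * y^q * f(x, y)."""
    rows, cols = img.shape
    y, x = np.mgrid[0:rows, 0:cols]   # y = row index, x = column index
    return float(np.sum((x ** p) * (y ** q) * img))

def motion_blur(img, length):
    """Horizontal motion blur as the average of `length` shifted copies.

    The output canvas is widened so no intensity leaves the image,
    which keeps the moment relations exact.
    """
    rows, cols = img.shape
    canvas = np.zeros((rows, cols + length - 1))
    for s in range(length):
        canvas[:, s:s + cols] += img / length
    return canvas

rng = np.random.default_rng(0)
f = rng.random((32, 32))              # stand-in for a sharp grayscale image
g = motion_blur(f, length=7)

# Zeroth-order moment (total intensity) is unchanged by the normalized blur.
print(raw_moment(f, 0, 0), raw_moment(g, 0, 0))

# First-order x-moment shifts linearly: m10(g) = m10(f) + c * m00(f),
# where c = (length - 1) / 2 is the centroid of the blur kernel.
c = (7 - 1) / 2
print(raw_moment(g, 1, 0), raw_moment(f, 1, 0) + c * raw_moment(f, 0, 0))
```

Higher-order moments of the blurred image are likewise linear (binomial) combinations of the original moments and the kernel moments, which is the bookkeeping that lets one test whether a given moment-based feature survives motion blur.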
Related papers
- Time-based Mapping of Space Using Visual Motion Invariants [0.0]
This paper focuses on visual motion-based invariants that result in a representation of 3D points in which the stationary environment remains invariant.
We refer to the resulting optical flow-based invariants as 'Time-Clearance' and the well-known 'Time-to-Contact'.
We present simulations of a camera moving relative to a 3D object, snapshots of its projected images captured by the rectilinearly moving camera, and show that the object appears unchanged in the new domain over time.
arXiv Detail & Related papers (2023-10-14T17:55:49Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-03-25T21:23:42Z)
- Image Moment Invariants to Rotational Motion Blur [47.76259900246576]
This paper proposes a novel method to generate image moment invariants under general rotational motion blur.
To the best of our knowledge, this is the first time that moment invariants for rotational motion blur have been proposed in the literature.
Our results show that the moment invariants proposed in this paper significantly outperform other features in various tasks.
arXiv Detail & Related papers (2023-01-18T14:58:32Z)
- Blur Invariants for Image Recognition [9.207644534257543]
Invariants with respect to blur offer an alternative way of describing and recognizing blurred images without any deblurring.
In this paper, we present an original unified theory of blur invariants.
arXiv Detail & Related papers (2023-01-13T11:55:30Z)
- Learning Transformations To Reduce the Geometric Shift in Object Detection [60.20931827772482]
We tackle geometric shifts emerging from variations in the image capture process.
We introduce a self-training approach that learns a set of geometric transformations to minimize these shifts.
We evaluate our method on two different shifts, i.e., a camera's field of view (FoV) change and a viewpoint change.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-04-08T13:59:14Z)
- Affine-modeled video extraction from a single motion blurred image [3.0080996413230667]
A motion-blurred image is the temporal average of multiple sharp frames over the exposure time (a minimal numerical illustration of this averaging model appears after this list).
In this work, we report a generalized video extraction method using affine motion modeling.
Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
arXiv Detail & Related papers (2021-03-25T09:02:56Z)
- MBA-VO: Motion Blur Aware Visual Odometry [99.56896875807635]
Motion blur is one of the major challenges remaining for visual odometry methods.
In low-light conditions where longer exposure times are necessary, motion blur can appear even for relatively slow camera motions.
We present a novel hybrid visual odometry pipeline with a direct approach that explicitly models and estimates the camera's local trajectory within the exposure time.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
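
Regarding the temporal-average model of motion blur mentioned in the 'Affine-modeled video extraction' entry above: for purely horizontal, constant-velocity motion, averaging the shifted sharp frames over the exposure is the same as convolving each image row with a normalized line point-spread function. The sketch below illustrates this under those simplifying assumptions (NumPy only, one-pixel steps per sub-frame); it is not the affine model used in that paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sharp = rng.random((24, 24))          # stand-in for one sharp frame
steps = 5                             # number of sub-frames in the exposure

# Temporal average of sharp frames translating 1 pixel per sub-frame;
# the canvas is widened so no content leaves the frame.
frames = np.zeros((steps, 24, 24 + steps - 1))
for t in range(steps):
    frames[t, :, t:t + 24] = sharp
blurred = frames.mean(axis=0)

# The same image via convolution of each row with a 1 x steps line PSF.
psf = np.ones(steps) / steps
via_psf = np.stack([np.convolve(row, psf, mode="full") for row in sharp])

print(np.allclose(blurred, via_psf))  # True
```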
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.