Image Moment Invariants to Rotational Motion Blur
- URL: http://arxiv.org/abs/2303.14566v1
- Date: Sat, 25 Mar 2023 21:23:42 GMT
- Title: Image Moment Invariants to Rotational Motion Blur
- Authors: Hanlin Mo, Hongxiang Hao, Guoying Zhao
- Abstract summary: This paper proposes a novel method to generate image moment invariants under general rotational motion blur.
To the best of our knowledge, this is the first time that moment invariants for rotational motion blur have been proposed in the literature.
Our results show that the moment invariants proposed in this paper significantly outperform other features in various tasks.
- Score: 47.76259900246576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rotational motion blur, caused by circular motion of the camera
and/or the object, is common in everyday life. Identifying objects from images affected by
rotational motion blur is challenging because this image degradation severely
impacts image quality. Therefore, it is meaningful to develop image invariant
features under rotational motion blur and then use them in practical tasks,
such as object classification and template matching. This paper proposes a
novel method to generate image moment invariants under general rotational
motion blur and provides some instances. Further, we achieve their invariance
to similarity transform. To the best of our knowledge, this is the first time
that moment invariants for rotational motion blur have been proposed in the
literature. We conduct extensive experiments on various image datasets
disturbed by similarity transform and rotational motion blur to test these
invariants' numerical stability and robustness to image noise. We also
demonstrate their performance in image classification and handwritten digit
recognition. Current state-of-the-art blur moment invariants and deep neural
networks are chosen for comparison. Our results show that the moment invariants
proposed in this paper significantly outperform other features in various
tasks.
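The paper's actual invariant construction for rotational motion blur is not reproduced here. As a minimal, illustrative sketch of the moment-invariant idea it builds on, the snippet below computes central complex moments of an image: rotating an image by an angle theta multiplies the complex moment c_pq by exp(i*(p-q)*theta), so the magnitudes |c_pq| are invariant to in-plane rotation. The function names, moment orders, and toy test image are assumptions for illustration, not the authors' code.
```python
import numpy as np
from scipy import ndimage

def complex_moment(img, p, q):
    """Central complex moment c_pq = sum_x sum_y (x+iy)^p (x-iy)^q f(x, y),
    with coordinates taken relative to the image centroid."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    m00 = img.sum()
    xc, yc = (xs * img).sum() / m00, (ys * img).sum() / m00
    z = (xs - xc) + 1j * (ys - yc)
    return (z ** p * np.conj(z) ** q * img).sum()

def rotation_invariants(img, orders=((1, 1), (2, 0), (2, 2), (3, 1))):
    """Rotating the image by theta multiplies c_pq by exp(i*(p-q)*theta),
    so the magnitudes |c_pq| are invariant to in-plane rotation."""
    return np.array([abs(complex_moment(img, p, q)) for p, q in orders])

# Toy check: invariants of an image and of a rotated copy agree closely
# (up to interpolation error introduced by the rotation).
img = np.zeros((128, 128))
img[40:80, 50:70] = 1.0
img[60:90, 70:90] += 0.5          # asymmetric pattern so moments are non-trivial
img = ndimage.gaussian_filter(img, 3)
rot = ndimage.rotate(img, angle=30, reshape=False)
print(rotation_invariants(img))
print(rotation_invariants(rot))
```
Invariance to scaling (the remaining part of a similarity transform) would additionally require normalizing the moments, e.g. by powers of the zeroth moment; that normalization is omitted from this sketch.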
Related papers
- Motion Blur Decomposition with Cross-shutter Guidance [33.72961622720793]
Motion blur is an artifact that arises under insufficient illumination, when the exposure time must be prolonged to collect enough photons for a bright image.
Recent research has aimed at decomposing a blurry image into multiple sharp images with spatial and temporal coherence.
We propose to utilize the ordered scanline-wise delay in a rolling shutter image to make motion decomposition of a single blurry image more robust (sketched below).
arXiv Detail & Related papers (2024-04-01T13:55:40Z)
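The cross-shutter entry above relies on the ordered scanline-wise delay of a rolling shutter sensor. The toy sketch below only illustrates that delay, not the paper's decomposition method: each row of the output is read from a progressively later frame of a hypothetical sharp-frame stack. All names and sizes are illustrative assumptions.
```python
import numpy as np

def rolling_shutter_readout(frames):
    """Form a rolling-shutter image from a stack of global-shutter frames:
    each scanline is read out later, so row r is taken from a frame whose
    index grows with r (the ordered scanline-wise delay)."""
    n_frames, height, _ = frames.shape
    rs = np.empty_like(frames[0])
    for r in range(height):
        t = int(r / (height - 1) * (n_frames - 1))  # later rows -> later frames
        rs[r] = frames[t][r]
    return rs

rng = np.random.default_rng(4)
stack = rng.random((8, 48, 64))   # 8 hypothetical sharp frames over one exposure
print(rolling_shutter_readout(stack).shape)   # (48, 64)
```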
- FRED: Towards a Full Rotation-Equivariance in Aerial Image Object Detection [28.47314201641291]
We introduce a Fully Rotation-Equivariant Oriented Object Detector (FRED).
Our proposed method delivers comparable performance on DOTA-v1.0 and outperforms by 1.5 mAP on DOTA-v1.5, while significantly reducing the model parameters to 16%.
arXiv Detail & Related papers (2023-12-22T09:31:43Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels [48.063176079878055]
One of the primary effects applied to images captured in portrait mode is a synthetic shallow depth of field (DoF).
In this work, we follow the trend of rendering the NIMAT effect by introducing a modification to the blur synthesis procedure in portrait mode.
Our modification enables high-quality synthesis of multi-view bokeh from a single image by applying rotated blurring kernels (sketched below).
arXiv Detail & Related papers (2021-11-15T15:23:55Z)
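The multi-view synthesis entry above generates views by applying rotated blur kernels. The snippet below is a minimal sketch of that general idea only, not the paper's dual-pixel NIMAT pipeline: an assumed base kernel (a horizontal box) is rotated to several angles and convolved with the image, producing one blurred view per angle.
```python
import numpy as np
from scipy import ndimage

def horizontal_box_kernel(size=15, thickness=3):
    """A simple horizontal blur kernel used as the base PSF (illustrative only)."""
    k = np.zeros((size, size))
    c = size // 2
    k[c - thickness // 2 : c + thickness // 2 + 1, :] = 1.0
    return k / k.sum()

def rotated_blur_views(img, angles_deg=(0, 45, 90, 135)):
    """Convolve the image with rotated copies of the base kernel,
    producing one blurred 'view' per rotation angle."""
    base = horizontal_box_kernel()
    views = []
    for a in angles_deg:
        k = ndimage.rotate(base, a, reshape=False, order=1)
        k = np.clip(k, 0, None)
        k /= k.sum()                      # keep the kernel mass-normalized
        views.append(ndimage.convolve(img, k, mode="reflect"))
    return np.stack(views)

rng = np.random.default_rng(1)
image = rng.random((96, 96))
print(rotated_blur_views(image).shape)    # (4, 96, 96)
```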
- Affine-modeled video extraction from a single motion blurred image [3.0080996413230667]
A motion-blurred image is the temporal average of multiple sharp frames over the exposure time (sketched below).
In this work, we report a generalized video extraction method using affine motion modeling.
Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
arXiv Detail & Related papers (2021-04-08T13:59:14Z)
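The affine-modeled extraction entry above starts from the fact that a motion-blurred image is the temporal average of the sharp frames over the exposure. The sketch below only demonstrates this forward model with an assumed purely translational motion; the frame count and per-frame shift are arbitrary illustrative values.
```python
import numpy as np
from scipy import ndimage

def simulate_motion_blur(sharp, n_frames=16, shift_per_frame=(0.0, 0.5)):
    """Average translated copies of a sharp image to emulate the blur
    accumulated over an exposure (linear motion assumed for simplicity)."""
    dy, dx = shift_per_frame
    frames = [
        ndimage.shift(sharp, (dy * t, dx * t), order=1, mode="nearest")
        for t in range(n_frames)
    ]
    return np.mean(frames, axis=0)

rng = np.random.default_rng(2)
sharp = rng.random((64, 64))
blurred = simulate_motion_blur(sharp)
print(blurred.shape)   # (64, 64)
```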
- Geometric Moment Invariants to Motion Blur [4.8915390363596005]
We focus on removing the interference of motion blur by deriving motion blur invariants.
Based on geometric moments and a mathematical model of motion blur, we prove that the geometric moments of a blurred image and those of the original image are linearly related (sketched below).
Surprisingly, we find that some geometric moment invariants are invariant not only to spatial transforms but also to motion blur.
arXiv Detail & Related papers (2021-01-21T14:50:34Z)
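The geometric-moment entry above states that the moments of a blurred image are linearly related to those of the original image, with coefficients determined by the PSF's moments. The snippet below numerically checks two immediate consequences for an assumed unit-mass PSF: the zeroth moment is preserved, and the blurred centroid equals the original centroid shifted by the PSF centroid. It is an illustration, not the paper's derivation of blur invariants.
```python
import numpy as np
from scipy.signal import convolve2d

def raw_moment(img, p, q):
    """Raw geometric moment m_pq = sum_x sum_y x^p y^q f(y, x)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
    return (xs ** p * ys ** q * img).sum()

def centroid(img):
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# Assumed blur PSF, normalized to unit mass (illustrative only).
psf = np.ones((5, 9))
psf /= psf.sum()

rng = np.random.default_rng(3)
image = rng.random((64, 64))
blurred = convolve2d(image, psf, mode="full")   # full support keeps all the mass

# m00 is preserved by a unit-mass PSF, and the blurred centroid is the original
# centroid shifted by the PSF centroid -- both follow from the linear relation
# between the blurred image's moments and the original image's moments.
print(raw_moment(image, 0, 0), raw_moment(blurred, 0, 0))
print(np.add(centroid(image), centroid(psf)), centroid(blurred))
```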
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par with or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring (sketched below).
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
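The self-supervised deblurring entry above hinges on a differentiable reblur model. The PyTorch sketch below shows only the generic idea, under the assumption of a known horizontal linear-motion kernel: reblur the current sharp estimate and penalize its mismatch with the observed blurry image, so no ground-truth sharp frame is needed. The kernel, tensor sizes, and loss are illustrative, not the paper's architecture.
```python
import torch
import torch.nn.functional as F

def linear_motion_kernel(length=9, size=15):
    """Horizontal linear-motion kernel (illustrative stand-in for an
    estimated per-image kernel)."""
    k = torch.zeros(size, size)
    c = size // 2
    k[c, c - length // 2 : c + length // 2 + 1] = 1.0
    return k / k.sum()

def reblur_loss(sharp_estimate, blurry_observed, kernel):
    """Self-supervised loss: reblur the current sharp estimate with the
    kernel and compare it to the observed blurry image."""
    k = kernel.view(1, 1, *kernel.shape)
    reblurred = F.conv2d(sharp_estimate, k, padding=kernel.shape[-1] // 2)
    return F.mse_loss(reblurred, blurry_observed)

# Toy usage: the sharp estimate is optimized so that its reblurred version
# matches the blurry observation; gradients flow through the reblur model.
blurry = torch.rand(1, 1, 64, 64)
estimate = blurry.clone().requires_grad_(True)
loss = reblur_loss(estimate, blurry, linear_motion_kernel())
loss.backward()
print(float(loss), estimate.grad.shape)
```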
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.