Second-order Anisotropic Gaussian Directional Derivative Filters for Blob Detection
- URL: http://arxiv.org/abs/2305.00435v1
- Date: Sun, 30 Apr 2023 09:32:16 GMT
- Title: Second-order Anisotropic Gaussian Directional Derivative Filters for Blob Detection
- Authors: Jie Ren, Wenya Yu, Jiapan Guo, Weichuan Zhang, Changming Sun
- Abstract summary: Interest point detection methods have received increasing attention and are widely used in computer vision tasks such as image retrieval and 3D reconstruction.
In this work, second-order anisotropic Gaussian directional derivative filters with multiple scales are used to smooth the input image and a novel blob detection method is proposed.
- Score: 26.777330356523954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interest point detection methods have received increasing attention and are
widely used in computer vision tasks such as image retrieval and 3D
reconstruction. In this work, second-order anisotropic Gaussian directional
derivative filters with multiple scales are used to smooth the input image and
a novel blob detection method is proposed. Extensive experiments demonstrate
the superiority of our proposed method over state-of-the-art benchmarks in
terms of detection performance and robustness to affine transformations.
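For concreteness, the sketch below illustrates the kind of pipeline the abstract describes: second-order anisotropic Gaussian directional derivative (AGDD) filters are built at several scales and orientations, convolved with the image, and local extrema of the combined response are kept as blob candidates. This is not the authors' implementation; the parameter values (scales, orientation count, anisotropy ratio, threshold), the scale normalization, and the peak-picking rule are illustrative assumptions.

# Sketch: blob candidates from second-order anisotropic Gaussian
# directional derivative (AGDD) responses at multiple scales/orientations.
import numpy as np
from scipy.ndimage import convolve, maximum_filter

def second_order_agdd_kernel(sigma_u, sigma_v, theta, radius=None):
    # Second derivative along the rotated u-axis of an anisotropic Gaussian
    # with standard deviations (sigma_u, sigma_v), rotated by angle theta.
    if radius is None:
        radius = int(3 * max(sigma_u, sigma_v))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    u = x * np.cos(theta) + y * np.sin(theta)   # filter-frame coordinates
    v = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(u**2 / (2 * sigma_u**2) + v**2 / (2 * sigma_v**2)))
    g /= 2 * np.pi * sigma_u * sigma_v
    # d^2/du^2 of the Gaussian: ((u^2 - sigma_u^2) / sigma_u^4) * g
    return (u**2 - sigma_u**2) / sigma_u**4 * g

def detect_blobs(image, sigmas=(1.5, 2.5, 4.0), n_orient=8,
                 anisotropy=2.0, threshold=0.02):
    # Combine scale-normalized absolute responses over scales and
    # orientations, then keep local maxima above a threshold as candidates.
    image = image.astype(np.float64)
    response = np.zeros_like(image)
    for sigma in sigmas:
        for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
            k = second_order_agdd_kernel(sigma, anisotropy * sigma, theta)
            r = sigma**2 * np.abs(convolve(image, k))  # rough scale normalization
            response = np.maximum(response, r)
    peaks = (response == maximum_filter(response, size=5)) & (response > threshold)
    return np.argwhere(peaks)  # (row, col) blob candidates

Applied to a grayscale image scaled to [0, 1], detect_blobs returns candidate blob centers; in practice one would also suppress edge responses and tune the threshold per image.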
Related papers
- Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis [53.702118455883095]
We propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting.
Our key idea lies in exploring the self-supervisions inherent in the binocular stereo consistency between each pair of binocular images.
Our method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-10-24T15:10:27Z) - UGAD: Universal Generative AI Detector utilizing Frequency Fingerprints [18.47018538990973]
Our study introduces a novel multi-modal approach to detect AI-generated images.
Our approach significantly enhances the accuracy of differentiating between real and AI-generated images.
arXiv Detail & Related papers (2024-09-12T10:29:37Z) - 2D-Malafide: Adversarial Attacks Against Face Deepfake Detection Systems [8.717726409183175]
We introduce 2D-Malafide, a novel and lightweight adversarial attack designed to deceive face deepfake detection systems.
Unlike traditional additive noise approaches, 2D-Malafide optimises a small number of filter coefficients to generate robust adversarial perturbations.
Experiments, conducted using the FaceForensics++ dataset, demonstrate that 2D-Malafide substantially degrades detection performance in both white-box and black-box settings.
arXiv Detail & Related papers (2024-08-26T09:41:40Z) - Diffusion-based 3D Object Detection with Random Boxes [58.43022365393569]
Existing anchor-based 3D detection methods rely on empirical settings of anchors, which makes the algorithms lack elegance.
Our proposed Diff3Det migrates the diffusion model to proposal generation for 3D object detection by considering the detection boxes as generative targets.
In the inference stage, the model progressively refines a set of random boxes to the prediction results.
arXiv Detail & Related papers (2023-09-05T08:49:53Z) - Detecting Rotated Objects as Gaussian Distributions and Its 3-D Generalization [81.29406957201458]
Existing detection methods commonly use a parameterized bounding box (BBox) to model and detect (horizontal) objects.
We argue that such a mechanism has fundamental limitations in building an effective regression loss for rotation detection.
We propose to model the rotated objects as Gaussian distributions.
We extend our approach from 2-D to 3-D with a tailored algorithm design to handle the heading estimation.
arXiv Detail & Related papers (2022-09-22T07:50:48Z) - Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z) - Adversarial Domain Feature Adaptation for Bronchoscopic Depth Estimation [111.89519571205778]
In this work, we propose an alternative domain-adaptive approach to depth estimation.
Our novel two-step structure first trains a depth estimation network with labeled synthetic images in a supervised manner.
The results of our experiments show that the proposed method improves the network's performance on real images by a considerable margin.
arXiv Detail & Related papers (2021-09-24T08:11:34Z) - Unsupervised Change Detection in Hyperspectral Images using Feature Fusion Deep Convolutional Autoencoders [15.978029004247617]
The proposed work aims to build a novel feature extraction system using a feature fusion deep convolutional autoencoder.
It is found that the proposed method clearly outperforms state-of-the-art methods in unsupervised change detection on all the datasets.
arXiv Detail & Related papers (2021-09-10T16:52:31Z) - Rotation Equivariant Feature Image Pyramid Network for Object Detection in Optical Remote Sensing Imagery [39.25541709228373]
We propose the rotation equivariant feature image pyramid network (REFIPN), an image pyramid network based on rotation equivariance convolution.
The proposed pyramid network extracts features in a wide range of scales and orientations by using novel convolution filters.
The detection performance of the proposed model is validated on two commonly used aerial benchmarks.
arXiv Detail & Related papers (2021-06-02T01:33:49Z) - Depth image denoising using nuclear norm and learning graph model [107.51199787840066]
Group-based image restoration methods are more effective at exploiting the similarity among patches.
For each patch, we find and group the most similar patches within a searching window.
The proposed method is superior to other current state-of-the-art denoising methods by both subjective and objective criteria.
arXiv Detail & Related papers (2020-08-09T15:12:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences of its use.