View Consistent Purification for Accurate Cross-View Localization
- URL: http://arxiv.org/abs/2308.08110v1
- Date: Wed, 16 Aug 2023 02:51:52 GMT
- Authors: Shan Wang, Yanhao Zhang, Akhil Perincherry, Ankit Vora and Hongdong Li
- Abstract summary: This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper proposes a fine-grained self-localization method for outdoor
robotics that utilizes a flexible number of onboard cameras and readily
accessible satellite images. The proposed method addresses limitations in
existing cross-view localization methods that struggle to handle noise sources
such as moving objects and seasonal variations. It is the first sparse
visual-only method that enhances perception in dynamic environments by
detecting view-consistent key points and their corresponding deep features from
ground and satellite views, while removing off-the-ground objects and
establishing homography transformation between the two views. Moreover, the
proposed method incorporates a spatial embedding approach that leverages camera
intrinsic and extrinsic information to reduce the ambiguity of purely visual
matching, leading to improved feature matching and overall pose estimation
accuracy. The method exhibits strong generalization and is robust to
environmental changes, requiring only geo-poses as ground truth. Extensive
experiments on the KITTI and Ford Multi-AV Seasonal datasets demonstrate that
our proposed method outperforms existing state-of-the-art methods, achieving
median spatial accuracy errors below 0.5 meters along the lateral and
longitudinal directions, and a median orientation accuracy error below 2
degrees.
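The abstract's core geometric step is establishing a homography between the ground and satellite views after removing off-the-ground objects. A minimal sketch of the underlying plane-induced homography (not the paper's implementation; intrinsics, pose, and all function names here are hypothetical): for points on a flat ground plane, the map from metric ground coordinates to camera pixels is H = K [r1 r2 t].

```python
import numpy as np

def ground_to_image_homography(K, R, t):
    """Homography mapping ground-plane coords (X, Y, 1) to pixel coords.

    Assumes the ground plane is Z = 0 in the world frame, so the projection
    K [R | t] collapses to K [r1 r2 t], a 3x3 homography.
    """
    H = K @ np.column_stack((R[:, 0], R[:, 1], t))
    return H / H[2, 2]  # normalize so H[2, 2] == 1

def warp_points(H, pts):
    """Apply a homography to Nx2 points, returning Nx2 projected points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    proj = (H @ pts_h.T).T
    return proj[:, :2] / proj[:, 2:3]  # perspective divide

# Toy calibration: identity rotation, camera 1.6 m above the ground plane.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1.6])
H = ground_to_image_homography(K, R, t)
uv = warp_points(H, np.array([[0.0, 0.0], [1.0, 2.0]]))
```

With this toy calibration, the ground-plane origin projects to the principal point (320, 240), which is one way the camera intrinsics/extrinsics constrain the matching, as the spatial embedding in the abstract suggests.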
Related papers
- Fixation-based Self-calibration for Eye Tracking in VR Headsets [0.21561701531034413]
The proposed method is based on the assumption that the user's viewpoint can move freely.
Fixations are first detected from the time-series data of uncalibrated gaze directions.
The calibration parameters are optimized by minimizing the sum of dispersion metrics of the PoRs.
arXiv Detail & Related papers (2023-11-01T09:34:15Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration [66.33746403815283]
We propose a scene-adaptive infrared and visible image registration method.
We employ homography to simulate the deformation between different planes.
We propose the first misaligned infrared and visible image dataset with available ground truth.
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- Cross-View Visual Geo-Localization for Outdoor Augmented Reality [11.214903134756888]
We address the problem of geo-pose estimation by cross-view matching of query ground images to a geo-referenced aerial satellite image database.
We propose a new transformer neural network-based model and a modified triplet ranking loss for joint location and orientation estimation.
Experiments on several benchmark cross-view geo-localization datasets show that our model achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-03-28T01:58:03Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- Beyond Cross-view Image Retrieval: Highly Accurate Vehicle Localization Using Satellite Image [91.29546868637911]
This paper addresses the problem of vehicle-mounted camera localization by matching a ground-level image with an overhead-view satellite map.
The key idea is to formulate the task as pose estimation and solve it by neural-net based optimization.
Experiments on standard autonomous vehicle localization datasets have confirmed the superiority of the proposed method.
arXiv Detail & Related papers (2022-04-10T19:16:58Z)
- Occlusion-Robust Object Pose Estimation with Holistic Representation [42.27081423489484]
State-of-the-art (SOTA) object pose estimators take a two-stage approach.
We develop a novel occlude-and-blackout batch augmentation technique.
We also develop a multi-precision supervision architecture to encourage holistic pose representation learning.
arXiv Detail & Related papers (2021-10-22T08:00:26Z)
- Detect and Locate: A Face Anti-Manipulation Approach with Semantic and Noise-level Supervision [67.73180660609844]
We propose a conceptually simple but effective method to efficiently detect forged faces in an image.
The proposed scheme relies on a segmentation map that delivers meaningful high-level semantic information clues about the image.
The proposed model achieves state-of-the-art detection accuracy and remarkable localization performance.
arXiv Detail & Related papers (2021-07-13T02:59:31Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Multi-view Drone-based Geo-localization via Style and Spatial Alignment [47.95626612936813]
Multi-view multi-source geo-localization serves as an important auxiliary method of GPS positioning by matching drone-view and satellite-view images with pre-annotated GPS tags.
We propose an elegant orientation-based method to align the patterns and introduce a new branch to extract aligned partial features.
arXiv Detail & Related papers (2020-06-23T15:44:02Z)
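Several entries above (e.g. the cross-view geo-localization paper) train with a triplet ranking loss on embeddings. A minimal sketch of such a loss, with the function name, margin, and toy embeddings all hypothetical:

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=0.3):
    """Hinge-style triplet loss: push the negative at least `margin`
    farther from the anchor than the positive, in L2 distance."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings: an easy triplet (positive much closer than negative)
# and a hard one (positive nearly as far as the negative).
anchor = np.array([0.0, 0.0])
loss_easy = triplet_ranking_loss(anchor, np.array([0.1, 0.0]), np.array([1.0, 0.0]))
loss_hard = triplet_ranking_loss(anchor, np.array([0.9, 0.0]), np.array([1.0, 0.0]))
```

The easy triplet already satisfies the margin and contributes zero loss; only triplets violating the margin drive the gradient, which is why such losses are typically paired with hard-negative mining.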
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.