Validation of Vector Data using Oblique Images
- URL: http://arxiv.org/abs/2206.09038v1
- Date: Fri, 17 Jun 2022 22:45:31 GMT
- Title: Validation of Vector Data using Oblique Images
- Authors: Pragyana Mishra, Eyal Ofek, Gur Kimchi
- Abstract summary: This paper presents a robust and scalable algorithm to detect inconsistencies in vector data using oblique images.
The algorithm uses image descriptors to encode the local appearance of a geospatial entity in images.
A Support Vector Machine is trained to detect image descriptors that are not consistent with underlying vector data.
- Score: 10.435599970058297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Oblique images are aerial photographs taken at oblique angles to the earth's
surface. Projections of vector and other geospatial data in these images depend
on camera parameters, positions of the geospatial entities, surface terrain,
occlusions, and visibility. This paper presents a robust and scalable algorithm
to detect inconsistencies in vector data using oblique images. The algorithm
uses image descriptors to encode the local appearance of a geospatial entity in
images. These image descriptors combine color, pixel-intensity gradients,
texture, and steerable filter responses. A Support Vector Machine classifier is
trained to detect image descriptors that are not consistent with underlying
vector data, digital elevation maps, building models, and camera parameters. In
this paper, we train the classifier on visible road segments and non-road data.
Thereafter, the trained classifier detects inconsistencies in vectors, which
include both occluded and misaligned road segments. The consistent road
segments validate our vector, DEM, and 3-D model data for those areas while
inconsistent segments point out errors. We further show that a search for
descriptors that are consistent with visible road segments in the neighborhood
of a misaligned road yields the desired road alignment that is consistent with
pixels in the image.
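The descriptor-plus-classifier pipeline described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the patch size, the exact feature mix (a coarse gradient-orientation histogram stands in for the texture and steerable-filter responses), and the Pegasos-style linear SVM training are all assumptions.

```python
import numpy as np

def road_descriptor(patch):
    """Encode the local appearance of an image patch (H, W, 3, values in
    [0, 1]): mean colour, gradient-energy statistics, and a coarse
    orientation histogram standing in for texture/steerable-filter cues."""
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)              # np.gradient returns (d/rows, d/cols)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=4, range=(-np.pi, np.pi), weights=mag)
    hist = hist / (hist.sum() + 1e-9)       # normalise; guard empty gradients
    return np.concatenate([patch.mean(axis=(0, 1)),   # colour (3)
                           [mag.mean(), mag.std()],    # gradient energy (2)
                           hist])                      # orientation/texture (4)

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Pegasos-style subgradient descent on the hinge loss; labels in
    {-1, +1}.  Returns (w, b) so that sign(x @ w + b) predicts whether a
    descriptor is consistent with a visible road segment."""
    w, b = np.zeros(X.shape[1]), 0.0
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            w *= 1.0 - lr * lam                  # L2 regularisation shrink
            if y[i] * (X[i] @ w + b) < 1:        # hinge margin violated
                w += lr * y[i] * X[i]
                b += lr * y[i]
    return w, b
```

In use, descriptors would be sampled along each projected road vector; segments whose descriptors score negative would flag occluded or misaligned vectors, while positive scores validate the vector, DEM, and model data for that area.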
Related papers
- Semi-supervised segmentation of land cover images using nonlinear
canonical correlation analysis with multiple features and t-SNE [1.7000283696243563]
Image segmentation is a clustering task whereby each pixel is assigned a cluster label.
In this work, by labeling only a small quantity of pixels, a new semi-supervised segmentation approach is proposed.
The proposed semi-supervised RBF-CCA algorithm has been implemented on several remotely sensed multispectral images.
arXiv Detail & Related papers (2024-01-22T17:56:07Z) - DIAR: Deep Image Alignment and Reconstruction using Swin Transformers [3.1000291317724993]
We create a dataset of images with various distortions.
For perspective distortions, the corresponding ground-truth homographies serve as labels.
We use our dataset to train Swin transformer models to analyze sequential image data.
arXiv Detail & Related papers (2023-10-17T21:59:45Z) - Linear features segmentation from aerial images [0.0]
We present a method for classifying and segmenting city road traffic dashed lines from aerial images using deep learning models such as U-Net and SegNet.
The annotated data is used to train these models, which are then used to classify and segment the aerial image into two classes: dashed lines and non-dashed lines.
We also extracted the x and y coordinates of each dashed line from the segmentation output, which can be used by city planners to construct a CAD file for digital visualization of the roads.
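The coordinate-extraction step can be sketched as a connected-components pass over the binary segmentation mask, reporting one (x, y) centroid per dash. This is an illustrative reconstruction: the function name, the 4-connectivity choice, and reporting centroids (rather than dash endpoints) are assumptions.

```python
import numpy as np

def dashed_line_coords(mask):
    """Return the (x, y) centroid of every 4-connected blob in a binary
    segmentation mask -- one coordinate per detected dash."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    coords = []
    for sy in range(h):                      # row-major scan for blob seeds
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                stack, pix = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:                 # iterative flood fill
                    y, x = stack.pop()
                    pix.append((x, y))
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                xs, ys = zip(*pix)
                coords.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return coords
```

The resulting coordinate list is the kind of intermediate a city planner's CAD export would consume, one point per dash.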
arXiv Detail & Related papers (2022-12-23T18:51:14Z) - Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label
Diffusion [51.11295961195151]
We exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels.
Based on the two discoveries of local spatial similarity and adjacent temporal correspondence of the sequential image data, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
arXiv Detail & Related papers (2022-06-10T05:16:50Z) - Learning Hierarchical Graph Representation for Image Manipulation
Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z) - Learning To Segment Dominant Object Motion From Watching Videos [72.57852930273256]
We envision a simple framework for dominant moving object segmentation that neither requires annotated data to train nor relies on saliency priors or pre-trained optical flow maps.
Inspired by a layered image representation, we introduce a technique to group pixel regions according to their affine parametric motion.
This enables our network to learn segmentation of the dominant foreground object using only RGB image pairs as input for both training and inference.
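The grouping idea above, namely explaining per-pixel motion with a single affine parametric model and keeping the pixels it fits, can be illustrated with a small least-squares toy. This is not the paper's learned network: the one-shot fit and the residual threshold are assumptions made for the sketch.

```python
import numpy as np

def dominant_motion_mask(pts, flow, thresh=1.0):
    """Fit one affine motion model  u = A p + t  to per-pixel flow vectors
    (pts: (N, 2) pixel positions, flow: (N, 2) displacements) by least
    squares, then mark the pixels whose flow it explains to within
    `thresh` pixels as belonging to the dominant motion layer."""
    P = np.hstack([pts, np.ones((len(pts), 1))])          # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(P, flow, rcond=None)     # (3, 2) = [A | t]^T
    residual = np.linalg.norm(P @ params - flow, axis=1)  # per-pixel fit error
    return residual < thresh
```

In the paper this kind of grouping supervises a segmentation network trained on RGB pairs; here it simply returns a binary mask over the sampled pixels.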
arXiv Detail & Related papers (2021-11-28T14:51:00Z) - Compositional Sketch Search [91.84489055347585]
We present an algorithm for searching image collections using free-hand sketches.
We exploit drawings as a concise and intuitive representation for specifying entire scene compositions.
arXiv Detail & Related papers (2021-06-15T09:38:09Z) - DeepI2P: Image-to-Point Cloud Registration via Deep Classification [71.3121124994105]
DeepI2P is a novel approach for cross-modality registration between an image and a point cloud.
Our method estimates the relative rigid transformation between the coordinate frames of the camera and Lidar.
We circumvent the difficulty by converting the registration problem into a classification and inverse camera projection optimization problem.
arXiv Detail & Related papers (2021-04-08T04:27:32Z) - Self-supervised Segmentation via Background Inpainting [96.10971980098196]
We introduce a self-supervised detection and segmentation approach that can work with single images captured by a potentially moving camera.
We exploit a self-supervised loss function to train a proposal-based segmentation network.
We apply our method to human detection and segmentation in images that visually depart from those of standard benchmarks and outperform existing self-supervised methods.
arXiv Detail & Related papers (2020-11-11T08:34:40Z) - Detecting Lane and Road Markings at A Distance with Perspective
Transformer Layers [5.033948921121557]
In existing approaches, the detection accuracy often degrades with the increasing distance.
This is due to the fact that distant lane and road markings occupy a small number of pixels in the image.
Inverse Perspective Mapping can be used to eliminate the perspective distortion, but the inherent interpolation can lead to artifacts.
arXiv Detail & Related papers (2020-03-19T03:22:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.