3D Visualization and Spatial Data Mining for Analysis of LULC Images
- URL: http://arxiv.org/abs/2202.00123v1
- Date: Fri, 28 Jan 2022 07:51:31 GMT
- Title: 3D Visualization and Spatial Data Mining for Analysis of LULC Images
- Authors: B. G. Kodge
- Abstract summary: The present study is an attempt to create a new tool for the analysis of Land Use Land Cover (LULC) images in 3D visualization.
This study mainly uses spatial data mining techniques on high-resolution LULC satellite imagery.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The present study is an attempt to create a new tool for the analysis of
Land Use Land Cover (LULC) images in 3D visualization. This study mainly uses
spatial data mining techniques on high-resolution LULC satellite imagery.
Visualization of feature space allows exploration of patterns in the image data
and insight into the classification process and related uncertainty. Visual
Data Mining provides added value to image classifications as the user can be
involved in the classification process, providing increased confidence in and
understanding of the results. In this study, we present a prototype image
segmentation, K-Means clustering, and 3D visualization tool for visual data
mining (VDM) of LULC satellite imagery via volume visualization. This
volume-based representation divides the feature space into spheres or voxels.
The visualization tool is showcased in a classification study in which
high-resolution LULC imagery of Latur district (Maharashtra state, India) is
used as sample data.
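To make the pipeline in the abstract concrete, here is a minimal sketch of K-Means clustering of an LULC image followed by a coarse voxel-style rendering of its feature space. The file name, number of clusters, and use of scikit-learn/Matplotlib are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): K-Means segmentation of a LULC image
# plus a coarse voxel rendering of its RGB feature space.
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.cluster import KMeans

# Hypothetical input file: an RGB LULC image of the study area.
img = np.asarray(Image.open("latur_lulc.png").convert("RGB"), dtype=np.float32) / 255.0
h, w, _ = img.shape
pixels = img.reshape(-1, 3)                      # feature space: one RGB point per pixel

k = 6                                            # assumed number of land-cover classes
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
class_map = labels.reshape(h, w)                 # segmented LULC map

# Divide the RGB feature space into a coarse voxel grid and mark occupied cells.
bins = 16
idx = np.clip((pixels * bins).astype(int), 0, bins - 1)
volume = np.zeros((bins, bins, bins), dtype=bool)
volume[idx[:, 0], idx[:, 1], idx[:, 2]] = True

fig = plt.figure(figsize=(10, 4))
fig.add_subplot(1, 2, 1).imshow(class_map, cmap="tab10")   # classified 2D map
ax3d = fig.add_subplot(1, 2, 2, projection="3d")
ax3d.voxels(volume)                                        # occupied feature-space voxels
plt.show()
```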
Related papers
- Neural Clustering based Visual Representation Learning [61.72646814537163]
Clustering is one of the most classic approaches in machine learning and data analysis.
We propose feature extraction with clustering (FEC), which views feature extraction as a process of selecting representatives from data.
FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives.
arXiv Detail & Related papers (2024-03-26T06:04:50Z)
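The FEC summary above describes an alternation between grouping pixels into clusters and refreshing pixel features from cluster representatives. The snippet below is an illustrative sketch of that loop; the feature sizes and the blending update are assumptions, not the paper's training procedure.

```python
# Illustrative sketch of an FEC-style alternation (not the paper's code):
# cluster pixel features, then refresh each pixel's feature from its
# cluster representative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
features = rng.normal(size=(4096, 64))    # hypothetical deep features, one row per pixel

for step in range(5):
    km = KMeans(n_clusters=32, n_init=4, random_state=step).fit(features)
    representatives = km.cluster_centers_[km.labels_]     # one representative per pixel
    # Assumed update rule: pull each pixel feature toward its representative.
    features = 0.5 * features + 0.5 * representatives
```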
- RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation [50.35403070279804]
3D occupancy prediction is an emerging task that aims to estimate the occupancy states and semantics of 3D scenes using multi-view images.
We propose RadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.
arXiv Detail & Related papers (2023-12-19T03:39:56Z)
- A Survey of Graph and Attention Based Hyperspectral Image Classification Methods for Remote Sensing Data [5.1901440366375855]
The use of Deep Learning techniques for classification in Hyperspectral Imaging (HSI) is rapidly growing.
Recent methods have also explored the usage of Graph Convolution Networks and their unique ability to use node features in prediction.
arXiv Detail & Related papers (2023-10-16T00:42:25Z)
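The survey entry above mentions graph convolution networks for hyperspectral classification. Below is a minimal sketch of one generic graph-convolution step (Kipf-and-Welling-style propagation) over hypothetical node features; it is not taken from any specific surveyed method.

```python
# One generic graph-convolution step: H' = D^{-1/2} (A + I) D^{-1/2} H W
import torch

n, f_in, f_out = 6, 8, 4
A = (torch.rand(n, n) > 0.5).float()
A = ((A + A.T) > 0).float()                  # symmetric adjacency between nodes (e.g., superpixels)
A_hat = A + torch.eye(n)                     # add self-loops
D_inv_sqrt = torch.diag(A_hat.sum(1).rsqrt())
H = torch.randn(n, f_in)                     # hypothetical node features (e.g., mean spectra)
W = torch.randn(f_in, f_out)                 # learnable weights in a real network
H_next = torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```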
- RRSIS: Referring Remote Sensing Image Segmentation [25.538406069768662]
Localizing desired objects from remote sensing images is of great use in practical applications.
Referring image segmentation, which aims at segmenting out the objects to which a given expression refers, has been extensively studied in natural images.
We introduce referring remote sensing image segmentation (RRSIS) to fill in this gap and make some insightful explorations.
arXiv Detail & Related papers (2023-06-14T16:40:19Z)
- Object Detection in Hyperspectral Image via Unified Spectral-Spatial Feature Aggregation [55.9217962930169]
We present S2ADet, an object detector that harnesses the rich spectral and spatial complementary information inherent in hyperspectral images.
S2ADet surpasses existing state-of-the-art methods, achieving robust and reliable results.
arXiv Detail & Related papers (2023-06-14T09:01:50Z)
- Multiscale Analysis for Improving Texture Classification [62.226224120400026]
This paper employs the Gaussian-Laplacian pyramid to treat different spatial frequency bands of a texture separately.
We aggregate features extracted from gray and color texture images using bio-inspired texture descriptors, information-theoretic measures, gray-level co-occurrence matrix features, and Haralick statistical features into a single feature vector.
arXiv Detail & Related papers (2022-04-21T01:32:22Z)
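The multiscale texture entry above combines Gaussian-Laplacian pyramid bands with co-occurrence statistics. The sketch below illustrates that idea with scikit-image on a stand-in image; the chosen GLCM properties and pyramid depth are assumptions, not the paper's exact descriptor set.

```python
# Rough sketch of a multiscale texture pipeline (assumed libraries, not the
# paper's code): build a Gaussian/Laplacian pyramid and aggregate gray-level
# co-occurrence (GLCM) statistics from every band into one feature vector.
import numpy as np
from skimage import data, transform, img_as_ubyte
from skimage.feature import graycomatrix, graycoprops

gray = data.camera()                       # stand-in texture image
pyramid = list(transform.pyramid_gaussian(gray, max_layer=3))
laplacian = [pyramid[i] - transform.resize(pyramid[i + 1], pyramid[i].shape)
             for i in range(len(pyramid) - 1)]                # band-pass layers

features = []
for band in pyramid + laplacian:
    u8 = img_as_ubyte((band - band.min()) / (np.ptp(band) + 1e-9))
    glcm = graycomatrix(u8, distances=[1], angles=[0], levels=256, symmetric=True)
    features += [graycoprops(glcm, prop)[0, 0]
                 for prop in ("contrast", "homogeneity", "energy", "correlation")]

feature_vector = np.array(features)        # one entry per band/statistic pair
```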
- Sci-Net: a Scale Invariant Model for Building Detection from Aerial Images [0.0]
We propose a Scale-invariant neural network (Sci-Net) that is able to segment buildings present in aerial images at different spatial resolutions.
Specifically, we modified the U-Net architecture and fused it with dense Atrous Spatial Pyramid Pooling (ASPP) to extract fine-grained multi-scale representations.
arXiv Detail & Related papers (2021-11-12T16:45:20Z)
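Sci-Net, described above, fuses a U-Net with dense Atrous Spatial Pyramid Pooling (ASPP). The block below is a generic ASPP-style module in PyTorch to illustrate the multi-rate dilated convolutions; the dilation rates and channel sizes are illustrative assumptions, not the paper's configuration.

```python
# Generic ASPP-style block (illustrative, not Sci-Net itself): parallel 3x3
# convolutions with different dilation rates, concatenated and projected.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 convolution per dilation rate captures a different receptive field.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

# Usage on a hypothetical encoder feature map from an aerial image.
feats = ASPP(256, 64)(torch.randn(1, 256, 32, 32))   # -> (1, 64, 32, 32)
```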
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find a low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
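The LowRankGAN entry above concerns low-rank subspaces of a generator. The toy sketch below illustrates the general idea of extracting low-rank latent directions from a generator's Jacobian via SVD; the tiny generator and the number of directions kept are assumptions, not the paper's method.

```python
# Toy illustration of low-rank latent directions (not LowRankGAN itself):
# SVD of the generator's Jacobian at a latent code gives principal directions.
import torch

torch.manual_seed(0)
generator = torch.nn.Sequential(            # stand-in for a pretrained GAN generator
    torch.nn.Linear(32, 256), torch.nn.ReLU(), torch.nn.Linear(256, 1024)
)

z = torch.randn(32)
jac = torch.autograd.functional.jacobian(generator, z)    # shape (1024, 32)
U, S, Vh = torch.linalg.svd(jac, full_matrices=False)
directions = Vh[:5]                          # top-5 latent directions (low-rank subspace)

# Moving along a principal direction changes the output noticeably.
step = 3.0 * directions[0]
print((generator(z + step) - generator(z)).norm())
```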
- Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well on well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
- Detecting the Presence of Vehicles and Equipment in SAR Imagery Using Image Texture Features [0.0]
We present a methodology for monitoring man-made, construction-like activities in low-resolution SAR imagery.
Our source of data is the European Space Agency Sentinel-1 satellite, which provides global coverage at a 12-day revisit rate.
Using an exploratory dataset, we trained a support vector machine (SVM), a random binary forest, and a fully-connected neural network for classification.
arXiv Detail & Related papers (2020-09-10T13:59:52Z)
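The SAR entry above trains an SVM, a random forest, and a fully connected neural network on image texture features. The sketch below shows that classification step with scikit-learn on synthetic stand-in features, since the paper's exploratory dataset is not available here.

```python
# Minimal sketch of the classification step described above (assumed data
# shapes; the real texture features and labels come from the paper's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))              # hypothetical texture feature vectors
y = rng.integers(0, 2, size=500)            # presence/absence of equipment

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(n_estimators=200)),
                  ("Neural network", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))       # held-out accuracy per classifier
```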
- Learning Hyperspectral Feature Extraction and Classification with ResNeXt Network [2.9967206019304937]
Hyperspectral image (HSI) classification is a standard remote sensing task, in which each image pixel is given a label indicating the physical land-cover on the earth's surface.
Utilizing both spectral and spatial cues has been shown to improve hyperspectral classification accuracy.
The use of only 3D Convolutional Neural Networks (3D-CNNs) to extract both spatial and spectral cues from hyperspectral images results in an explosion of parameters and hence high computational cost.
We propose a network architecture called MixedSN that utilizes 3D convolutions to model spectral-spatial information.
arXiv Detail & Related papers (2020-02-07T01:54:15Z)
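The last entry notes that relying only on 3D convolutions over hyperspectral cubes explodes the parameter count. The sketch below makes that point concrete by comparing a plain 3D convolution with a factored spectral-then-spatial pair; these are generic layers, not the MixedSN architecture.

```python
# Back-of-the-envelope illustration of the parameter-explosion remark above
# (generic layers, not MixedSN): a plain 3D convolution versus a factored
# spectral-then-spatial pair over a hyperspectral feature cube.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 30, 25, 25)           # (batch, channels, bands, height, width)

full_3d = nn.Conv3d(64, 128, kernel_size=(7, 3, 3), padding=(3, 1, 1))
factored = nn.Sequential(
    nn.Conv3d(64, 128, kernel_size=(7, 1, 1), padding=(3, 0, 0)),   # spectral only
    nn.Conv3d(128, 128, kernel_size=(1, 3, 3), padding=(0, 1, 1)),  # spatial only
)

def n_params(m):
    return sum(p.numel() for p in m.parameters())

print("full 3D conv parameters:", n_params(full_3d))
print("factored conv parameters:", n_params(factored))
print(full_3d(x).shape, factored(x).shape)   # same output shape, fewer weights when factored
```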
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.