A new visual quality metric for evaluating the performance of multidimensional projections
- URL: http://arxiv.org/abs/2407.16309v1
- Date: Tue, 23 Jul 2024 09:02:46 GMT
- Title: A new visual quality metric for evaluating the performance of multidimensional projections
- Authors: Maniru Ibrahim, Thales Vieira
- Abstract summary: We propose a new visual quality metric based on human perception.
We show that the proposed metric produces more precise results in analyzing the quality of MP than other previously used metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multidimensional projections (MPs) are among the most essential approaches in the visual analysis of multidimensional data. They transform multidimensional data into two-dimensional representations that may be shown as scatter plots while preserving similarity with the original data. Human visual perception is frequently used to evaluate the quality of MPs. In this work, we propose to study and improve on a well-known mapping called Local Affine Multidimensional Projection (LAMP), which takes a multidimensional instance and embeds it in Cartesian space via moving least squares deformation. We propose a new visual quality metric based on human perception. The new metric combines three previously used metrics: silhouette coefficient, neighborhood preservation, and silhouette ratio. We show that the proposed metric produces more precise results in analyzing the quality of MPs than other previously used metrics. Finally, we describe an algorithm that attempts to overcome a limitation of the LAMP method, which requires a similar scale for control points and their counterparts in Cartesian space.
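Two of the three combined metrics can be sketched directly in code. The snippet below is a minimal NumPy illustration, not the paper's implementation: it computes a silhouette coefficient on the 2D projection and a k-neighborhood preservation score between the high- and low-dimensional spaces, then averages them. The silhouette ratio and the actual weighting scheme are not specified here, so `combined_metric` uses a hypothetical equal-weight aggregation.

```python
import numpy as np

def pairwise_dist(X):
    """Full Euclidean distance matrix for the rows of X."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def silhouette_coefficient(X, labels):
    """Mean silhouette s(i) = (b - a) / max(a, b) over all points."""
    D = pairwise_dist(X)
    scores = []
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False  # exclude the point itself from its own cluster
        a = D[i][same].mean()  # mean intra-cluster distance
        b = min(D[i][labels == c].mean()  # nearest other cluster
                for c in np.unique(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def neighborhood_preservation(X_high, X_low, k=3):
    """Average overlap of each point's k-NN sets before/after projection."""
    Dh, Dl = pairwise_dist(X_high), pairwise_dist(X_low)
    np.fill_diagonal(Dh, np.inf)
    np.fill_diagonal(Dl, np.inf)
    overlaps = []
    for i in range(len(X_high)):
        nh = set(np.argsort(Dh[i])[:k])
        nl = set(np.argsort(Dl[i])[:k])
        overlaps.append(len(nh & nl) / k)
    return float(np.mean(overlaps))

def combined_metric(X_high, X_low, labels, k=3):
    """Hypothetical aggregation: rescale silhouette to [0, 1], average with NP."""
    s = (silhouette_coefficient(X_low, labels) + 1) / 2
    return (s + neighborhood_preservation(X_high, X_low, k)) / 2
```

A perfect projection of well-separated clusters scores close to 1 on both components; a projection that scrambles neighborhoods drags the score toward 0.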
Related papers
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z) - RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation [50.35403070279804]
3D occupancy prediction is an emerging task that aims to estimate the occupancy states and semantics of 3D scenes using multi-view images.
We propose RadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.
arXiv Detail & Related papers (2023-12-19T03:39:56Z) - Large-Scale Evaluation of Topic Models and Dimensionality Reduction Methods for 2D Text Spatialization [2.6034734004409303]
We show that interpretable topic models are beneficial for capturing the structure of text corpora.
We propose guidelines for the effective design of text spatializations based on topic models and dimensionality reductions.
arXiv Detail & Related papers (2023-07-17T14:08:25Z) - Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View [44.78243406441798]
This paper focuses on leveraging geometry information, such as depth, to model such feature transformation.
We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view.
We then aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame.
arXiv Detail & Related papers (2023-07-09T06:07:22Z) - Contour Context: Abstract Structural Distribution for 3D LiDAR Loop Detection and Metric Pose Estimation [31.968749056155467]
This paper proposes a simple, effective, and efficient topological loop closure detection pipeline with accurate 3-DoF metric pose estimation.
We interpret the Cartesian bird's-eye view (BEV) image projected from 3D LiDAR points as a layered distribution of structures.
A retrieval key is designed to accelerate the search of a database indexed by layered KD-trees.
arXiv Detail & Related papers (2023-02-13T07:18:24Z) - Learning-Based Dimensionality Reduction for Computing Compact and Effective Local Feature Descriptors [101.62384271200169]
A distinctive representation of image patches in form of features is a key component of many computer vision and robotics tasks.
We investigate multi-layer perceptrons (MLPs) to extract low-dimensional but high-quality descriptors.
We consider different applications, including visual localization, patch verification, image matching and retrieval.
arXiv Detail & Related papers (2022-09-27T17:59:04Z) - Hierarchical Nearest Neighbor Graph Embedding for Efficient Dimensionality Reduction [25.67957712837716]
We introduce a novel method based on a hierarchy built on 1-nearest neighbor graphs in the original space.
The proposal is an optimization-free projection that is competitive with the latest versions of t-SNE and UMAP.
In the paper, we argue for the soundness of the proposed method and evaluate it on a diverse collection of datasets, with sizes varying from 1K to 11M samples and dimensionality from 28 to 16K.
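The core construction this entry describes, a hierarchy built on 1-nearest-neighbor graphs, can be sketched compactly. The snippet below is an illustrative simplification, assuming only that each level collapses the connected components of the 1-NN graph into their centroids; the method's actual projection step is omitted.

```python
import numpy as np

def one_nn_components(X):
    """Labels of connected components of the 1-nearest-neighbor graph."""
    D = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    np.fill_diagonal(D, np.inf)
    nn = D.argmin(1)  # each point's single nearest neighbor

    # union-find over the edges (i, nn[i])
    parent = list(range(len(X)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i, j in enumerate(nn):
        parent[find(i)] = find(int(j))

    roots = [find(i) for i in range(len(X))]
    _, labels = np.unique(roots, return_inverse=True)
    return labels

def build_hierarchy(X, max_levels=5):
    """Repeatedly collapse 1-NN components to centroids, coarsest level last."""
    levels = [X]
    while len(levels[-1]) > 1 and len(levels) < max_levels:
        X_cur = levels[-1]
        labels = one_nn_components(X_cur)
        centroids = np.array([X_cur[labels == c].mean(0)
                              for c in range(labels.max() + 1)])
        if len(centroids) == len(X_cur):
            break  # no merging happened; stop
        levels.append(centroids)
    return levels
```

Because every node of the 1-NN graph has an outgoing edge, each component contains at least two points, so every level is strictly smaller than the one below it.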
arXiv Detail & Related papers (2022-03-24T11:41:16Z) - Incorporating Texture Information into Dimensionality Reduction for High-Dimensional Images [65.74185962364211]
We present a method for incorporating neighborhood information into distance-based dimensionality reduction methods.
Based on a classification of different methods for comparing image patches, we explore a number of different approaches.
arXiv Detail & Related papers (2022-02-18T13:17:43Z) - Direct Multi-view Multi-person 3D Pose Estimation [138.48139701871213]
We present Multi-view Pose transformer (MvP) for estimating multi-person 3D poses from multi-view images.
MvP directly regresses the multi-person 3D poses in a clean and efficient way, without relying on intermediate tasks.
We show experimentally that our MvP model outperforms the state-of-the-art methods on several benchmarks while being much more efficient.
arXiv Detail & Related papers (2021-11-07T13:09:20Z) - UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data [63.74032987144699]
We present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping.
NNInv learns to reconstruct high-dimensional data from any arbitrary point on a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system.
arXiv Detail & Related papers (2021-11-02T17:11:57Z)
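The idea behind this last entry, regressing high-dimensional coordinates from 2D projection coordinates, can be illustrated with a toy sketch. This is not the NNInv architecture: it fits a single-hidden-layer MLP with hand-written backpropagation in NumPy, and the high-dimensional "originals" are an invented mapping chosen only so the demo has something to reconstruct.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2D "projection" points and hypothetical 4D "originals".
P = rng.normal(size=(64, 2))
X = np.hstack([P, P ** 2])  # invented mapping to invert

# One-hidden-layer MLP: 2 -> 16 -> 4, trained with full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 4)); b2 = np.zeros(4)

def forward(p):
    h = np.tanh(p @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(((pred - target) ** 2).mean())

lr = 0.05
_, out = forward(P)
loss_before = mse(out, X)

for _ in range(500):
    h, out = forward(P)
    g_out = 2 * (out - X) / out.size          # d(MSE)/d(out)
    gW2, gb2 = h.T @ g_out, g_out.sum(0)
    g_pre = (g_out @ W2.T) * (1 - h ** 2)     # backprop through tanh
    gW1, gb1 = P.T @ g_pre, g_pre.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, out = forward(P)
loss_after = mse(out, X)
```

Once trained, any point picked on the 2D scatter plot, not just a training point, can be pushed through `forward` to obtain an approximate high-dimensional counterpart, which is what enables the interactive analysis described above.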
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.