Visualisation of a multidimensional point cloud as a 3D swarm of avatars
- URL: http://arxiv.org/abs/2504.06751v1
- Date: Wed, 09 Apr 2025 10:14:33 GMT
- Title: Visualisation of a multidimensional point cloud as a 3D swarm of avatars
- Authors: Leszek Luchowski, Dariusz Pojda
- Abstract summary: The article presents an innovative approach to the visualisation of multidimensional data, using icons inspired by Chernoff faces. The approach merges classical projection techniques with the assignment of particular data dimensions to mimic features. The technique is implemented as a plugin to the dpVision open-source image handling platform.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The article presents an innovative approach to the visualisation of multidimensional data, using icons inspired by Chernoff faces. The approach merges classical projection techniques with the assignment of particular data dimensions to mimic features, capitalizing on the natural ability of the human brain to interpret facial expressions. The technique is implemented as a plugin to the dpVision open-source image handling platform. The plugin allows the data to be interactively explored in the form of a swarm of "totems" whose position in hyperspace as well as facial features represent various aspects of the data. Sample visualisations, based on synthetic test data as well as the vinhoverde 15-dimensional database on Portuguese wines, confirm the usefulness of our approach to the analysis of complex data structures.
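The abstract's core idea can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration, not the dpVision plugin or its API: a classical projection (here PCA) places each record as an avatar in 3D, while selected data dimensions are rescaled and assigned to Chernoff-style facial-feature parameters. All function and feature names are assumptions made for illustration.

```python
# Minimal sketch (assumed names, not the dpVision plugin API): classical projection
# gives each avatar its 3D position; selected dimensions drive facial features.
import numpy as np
from sklearn.decomposition import PCA

def rescale(column):
    # Rescale one data dimension to [0, 1] so it can drive a facial feature.
    lo, hi = column.min(), column.max()
    return (column - lo) / (hi - lo) if hi > lo else np.zeros_like(column)

def build_avatar_swarm(data, face_dims, feature_names):
    # data: (n_samples, n_dims) array of multidimensional points.
    # face_dims: indices of the dimensions assigned to facial features.
    positions = PCA(n_components=3).fit_transform(data)        # projection -> 3D placement
    scaled = np.column_stack([rescale(data[:, j]) for j in face_dims])
    faces = [dict(zip(feature_names, row)) for row in scaled]  # one feature set per avatar
    return positions, faces

# Synthetic example: 8-dimensional points, five dimensions drive the face.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 8))
names = ["mouth_curvature", "eye_size", "brow_angle", "nose_length", "face_width"]
positions, faces = build_avatar_swarm(cloud, face_dims=[3, 4, 5, 6, 7], feature_names=names)
print(positions.shape, faces[0])
```

In the plugin itself the feature values would drive the geometry of the "totem" avatars rendered in the 3D scene; here they are simply returned as dictionaries to keep the sketch self-contained.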
Related papers
- IAAO: Interactive Affordance Learning for Articulated Objects in 3D Environments [56.85804719947]
We present IAAO, a framework that builds an explicit 3D model for intelligent agents to gain understanding of articulated objects in their environment through interaction. We first build hierarchical features and label fields for each object state using 3D Gaussian Splatting (3DGS) by distilling mask features and view-consistent labels from multi-view images. We then perform object- and part-level queries on the 3D Gaussian primitives to identify static and articulated elements, estimating global transformations and local articulation parameters along with affordances.
arXiv Detail & Related papers (2025-04-09T12:36:48Z) - Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment. Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z) - Formula-Supervised Visual-Geometric Pre-training [23.060257369945013]
We introduce Formula-Supervised Visual-Geometric Pre-training (FSVGP).
FSVGP is a novel synthetic pre-training method that automatically generates aligned synthetic images and point clouds from mathematical formulas.
Our experimental results show that FSVGP pre-trains more effectively than VisualAtom and PC-FractalDB across six tasks.
arXiv Detail & Related papers (2024-09-20T14:24:52Z) - Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments robustly display our method's consistent superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z) - DataViz3D: An Novel Method Leveraging Online Holographic Modeling for Extensive Dataset Preprocessing and Visualization [0.9790236766474201]
DataViz3D transforms complex datasets into interactive 3D spatial models using holographic technology.
This tool enables users to generate scatter plots within a 3D space, accurately mapped to the XYZ coordinates of the dataset.
arXiv Detail & Related papers (2024-01-18T23:02:08Z) - Parametric Depth Based Feature Representation Learning for Object Detection and Segmentation in Bird's Eye View [44.78243406441798]
This paper focuses on leveraging geometry information, such as depth, to model such feature transformation.
We first lift the 2D image features to the 3D space defined for the ego vehicle via a predicted parametric depth distribution for each pixel in each view.
We then aggregate the 3D feature volume based on the 3D space occupancy derived from depth to the BEV frame.
arXiv Detail & Related papers (2023-07-09T06:07:22Z) - Neural Volumetric Object Selection [126.04480613166194]
We introduce an approach for selecting objects in neural volumetric 3D representations, such as multi-plane images (MPI) and neural radiance fields (NeRF).
Our approach takes a set of foreground and background 2D user scribbles in one view and automatically estimates a 3D segmentation of the desired object, which can be rendered into novel views.
arXiv Detail & Related papers (2022-05-30T08:55:20Z) - Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z) - UnProjection: Leveraging Inverse-Projections for Visual Analytics of High-Dimensional Data [63.74032987144699]
We present NNInv, a deep learning technique with the ability to approximate the inverse of any projection or mapping.
NNInv learns to reconstruct high-dimensional data from an arbitrary point on a 2D projection space, giving users the ability to interact with the learned high-dimensional representation in a visual analytics system. (A minimal sketch of this inverse-projection idea appears after the list below.)
arXiv Detail & Related papers (2021-11-02T17:11:57Z) - MANet: Multimodal Attention Network based Point-View fusion for 3D Shape Recognition [0.5371337604556311]
This paper proposes a fusion network based on a multimodal attention mechanism for 3D shape recognition.
Considering the limitations of multi-view data, we introduce a soft attention scheme, which can use the global point-cloud features to filter the multi-view features.
More specifically, we obtain the enhanced multi-view features by mining the contribution of each multi-view image to the overall shape recognition.
arXiv Detail & Related papers (2020-02-28T07:00:14Z) - Genetic Programming for Evolving a Front of Interpretable Models for Data Visualisation [4.4181317696554325]
We propose a genetic programming approach named GPtSNE for evolving interpretable mappings from a dataset to high-quality visualisations.
A multi-objective approach is designed that produces, in a single run, a variety of visualisations offering different trade-offs between visual quality and model complexity.
arXiv Detail & Related papers (2020-01-27T04:03:19Z)
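As a companion to the UnProjection (NNInv) entry above, the following is a minimal, hypothetical sketch of the inverse-projection idea: fit a small multi-output regressor that maps 2D projection coordinates back to the original high-dimensional space, so an arbitrary point on the 2D canvas can be lifted to a plausible high-dimensional record. This is an illustrative approximation using scikit-learn, not the authors' NNInv implementation.

```python
# Illustrative inverse-projection sketch (not the NNInv code): learn a mapping
# from 2D projection coordinates back to the high-dimensional data space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
high_dim = rng.normal(size=(1000, 15))          # stand-in for a 15-dimensional dataset

# Any 2D projection can be used; PCA keeps the example self-contained.
projection = PCA(n_components=2).fit_transform(high_dim)

# Learn the inverse mapping: 2D projection coordinates -> 15-D reconstruction.
inverse_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=1)
inverse_model.fit(projection, high_dim)

# Query an arbitrary point on the 2D canvas and recover a plausible 15-D record.
query_2d = np.array([[0.5, -0.3]])
reconstruction = inverse_model.predict(query_2d)
print(reconstruction.shape)                     # (1, 15)
```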
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.