Deep Generative Neural Embeddings for High Dimensional Data Visualization
- URL: http://arxiv.org/abs/2302.10801v1
- Date: Wed, 25 Jan 2023 14:18:09 GMT
- Title: Deep Generative Neural Embeddings for High Dimensional Data Visualization
- Authors: Halid Ziya Yerebakan, Gerardo Hermosillo Valadez
- Abstract summary: We propose a visualization technique that utilizes neural network embeddings and a generative network to reconstruct original data.
We have evaluated the effectiveness of this technique in data visualization and compared it to t-SNE and VAE methods.
Our technique has potential applications in human-in-the-loop training, as it allows for independent editing of embedding locations without affecting the optimization process.
- Score: 0.45687771576879593
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We propose a visualization technique that utilizes neural network embeddings and a generative network to reconstruct original data. This method allows for independent manipulation of individual image embeddings through its non-parametric structure, providing more flexibility than traditional autoencoder approaches. We have evaluated the effectiveness of this technique in data visualization and compared it to t-SNE and VAE methods. Furthermore, we have demonstrated the scalability of our method through visualizations on the ImageNet dataset. Our technique has potential applications in human-in-the-loop training, as it allows for independent editing of embedding locations without affecting the optimization process.
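As a rough illustration of the architecture the abstract describes, the sketch below pairs a non-parametric embedding table (one trainable 2-D vector per image, with no encoder tying them together) with a generator network trained to reconstruct the images. All names, layer sizes, and the reconstruction loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GenerativeEmbedding(nn.Module):
    """Minimal sketch: each image owns a free 2-D embedding (its
    visualization coordinates), and a shared generator reconstructs
    the image from that embedding."""
    def __init__(self, num_images, embed_dim=2, image_dim=28 * 28):
        super().__init__()
        # Non-parametric: one independent row per image, so a row can
        # be edited or pinned without affecting any other embedding.
        self.embeddings = nn.Embedding(num_images, embed_dim)
        self.generator = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, image_dim), nn.Sigmoid(),
        )

    def forward(self, indices):
        return self.generator(self.embeddings(indices))

model = GenerativeEmbedding(num_images=10_000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(indices, images):
    """One joint update of embeddings and generator; `images` are
    flattened to (batch, 784) with values in [0, 1]."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(indices), images)
    loss.backward()
    opt.step()
    return loss.item()
```

Because each embedding row is an independent parameter rather than the output of an encoder, a point can be moved in the 2-D view without perturbing any other point, which is the flexibility the abstract contrasts with autoencoder approaches.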
Related papers
- Gradient-Free Supervised Learning using Spike-Timing-Dependent Plasticity for Image Recognition [3.087000217989688]
An approach to supervised learning in spiking neural networks is presented using a gradient-free method combined with spike-timing-dependent plasticity for image recognition.
The proposed network architecture is scalable to multiple layers, enabling the development of more complex and deeper SNN models.
arXiv Detail & Related papers (2024-10-21T21:32:17Z)
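The entry above pairs a gradient-free training scheme with spike-timing-dependent plasticity; for reference, the sketch below shows the standard pair-based STDP rule, which may differ from the paper's exact variant (all constants are illustrative).

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate the synapse when the presynaptic
    spike precedes the postsynaptic spike, depress it otherwise, with
    exponentially decaying influence in the spike-time gap."""
    dt = t_post - t_pre  # milliseconds
    if dt >= 0:
        w += a_plus * np.exp(-dt / tau_plus)
    else:
        w -= a_minus * np.exp(dt / tau_minus)
    return float(np.clip(w, w_min, w_max))
```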
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
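The eye-tracking entry above mentions dimensionality reduction for measuring real-versus-synthetic overlap; one plausible reading, sketched below as an assumption rather than the paper's stated procedure, is to project both image sets into a shared PCA space and compare the resulting distributions.

```python
import numpy as np
from sklearn.decomposition import PCA

def domain_gap(real, synth, n_components=2):
    """Fit PCA on the pooled, flattened images, project each set, and
    report the distance between set centroids as a crude proxy for how
    well the real and synthetic domains overlap."""
    pca = PCA(n_components=n_components).fit(np.vstack([real, synth]))
    r, s = pca.transform(real), pca.transform(synth)
    return float(np.linalg.norm(r.mean(axis=0) - s.mean(axis=0)))
```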
- Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its ability to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z)
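Feature Visualization in the entry above refers to activation maximization: synthesizing an input that maximally excites a chosen neuron. The sketch below shows only that generic baseline procedure; the paper's "gradient slingshot" manipulation is not reproduced, and `neuron_activation` is a hypothetical helper.

```python
import torch

def visualize_neuron(model, neuron_activation, steps=200, lr=0.05,
                     input_shape=(1, 3, 224, 224)):
    """Gradient-ascent activation maximization. `neuron_activation`
    maps (model, input) to the scalar activation of the target neuron."""
    x = torch.randn(input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -neuron_activation(model, x)  # ascend the activation
        loss.backward()
        opt.step()
    return x.detach()
```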
- Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing [76.72662577101988]
This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM; and second, applying active learning techniques to the developed classification model.
This allows the construction of a human-in-the-loop mechanism that reduces the amount of data required to train the model and to generate training data.
arXiv Detail & Related papers (2023-07-14T14:36:58Z)
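The human-in-the-loop mechanism in the defect-classification entry above is a standard active-learning loop; a hedged sketch with least-confidence sampling follows, though the paper's acquisition function may differ.

```python
import numpy as np

def select_for_labeling(probs, k=10):
    """Least-confidence acquisition: among unlabeled samples, pick the
    k whose top class probability is lowest, i.e. where the classifier
    is least sure and a human label is most informative.
    `probs` has shape (n_unlabeled, n_classes)."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]
```

Each round, the selected samples are labeled by a human, added to the training set, and the model is retrained, shrinking the labeling budget needed to reach a target accuracy.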
- Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z)
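An implicit neural representation, as in the entry above, compresses a volume by training a small network to map coordinates to field values, so the weights become the compressed data. Below is a minimal single-process sketch; the paper's contribution, distributing this across ranks without data exchange, is not reproduced.

```python
import torch
import torch.nn as nn

class VolumeINR(nn.Module):
    """Maps normalized (x, y, z) coordinates to a scalar field value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):  # coords: (N, 3) in [-1, 1]
        return self.net(coords)

def fit(inr, coords, values, steps=1000, lr=1e-3):
    """Overfit the network to one volume; (coords, values) are sampled
    voxel positions and their scalar values, shaped (N, 3) and (N, 1)."""
    opt = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(inr(coords), values)
        loss.backward()
        opt.step()
    return inr
```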
- Denoising Diffusion Probabilistic Models for Generation of Realistic Fully-Annotated Microscopy Image Data Sets [1.07539359851877]
In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets.
The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches.
arXiv Detail & Related papers (2023-01-02T14:17:08Z)
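The forward (noising) process underlying the DDPM in the entry above has a closed form, q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I); a sketch of that standard step, independent of the microscopy-specific pipeline, follows.

```python
import torch

# Standard linear beta schedule; 1000 steps is a common default.
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0, t, alphas_cumprod):
    """Draw x_t ~ q(x_t | x_0) in one shot:
    x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t]
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps, eps
```

Training then regresses a denoising network to predict `eps` from `x_t` and `t`; sampling runs the learned reverse process to generate the synthetic data.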
- DELAD: Deep Landweber-guided deconvolution with Hessian and sparse prior [0.22940141855172028]
We present a model for non-blind image deconvolution that incorporates the classic iterative method into a deep learning application.
We build our network based on the iterative Landweber deconvolution algorithm, which is integrated with trainable convolutional layers to enhance the recovered image structures and details.
arXiv Detail & Related papers (2022-09-30T11:15:03Z)
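The classic iteration that DELAD unrolls is the Landweber update x_{k+1} = x_k + λ K^T (y - K x_k); below is a plain, non-learned version for a known blur kernel, a sketch of the iteration itself rather than of the paper's trainable architecture.

```python
import numpy as np
from scipy.signal import fftconvolve

def landweber_deconv(y, kernel, steps=100, step_size=1.0):
    """Non-blind deconvolution by Landweber iteration. K is convolution
    with `kernel`; its adjoint K^T is convolution with the flipped
    kernel. For a normalized kernel, step_size < 2 ensures convergence."""
    x = np.zeros_like(y, dtype=float)
    k_flip = kernel[::-1, ::-1]
    for _ in range(steps):
        residual = y - fftconvolve(x, kernel, mode="same")
        x = x + step_size * fftconvolve(residual, k_flip, mode="same")
    return x
```

DELAD replaces parts of this fixed-point loop with trainable convolutional layers and adds Hessian and sparsity priors, per the summary above.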
- Neural Data-Dependent Transform for Learned Image Compression [72.86505042102155]
We build a neural data-dependent transform and introduce a continuous online mode decision mechanism to jointly optimize the coding efficiency for each individual image.
The experimental results show the effectiveness of the proposed neural-syntax design and the continuous online mode decision mechanism.
arXiv Detail & Related papers (2022-03-09T14:56:48Z)
- NerfingMVS: Guided Optimization of Neural Radiance Fields for Indoor Multi-view Stereo [97.07453889070574]
We present a new multi-view depth estimation method that utilizes both conventional SfM reconstruction and learning-based priors.
We show that our proposed framework significantly outperforms state-of-the-art methods on indoor scenes.
arXiv Detail & Related papers (2021-09-02T17:54:31Z)
- Clustering augmented Self-Supervised Learning: An application to Land Cover Mapping [10.720852987343896]
We introduce a new method for land cover mapping by using a clustering based pretext task for self-supervised learning.
We demonstrate the effectiveness of the method on two societally relevant applications.
arXiv Detail & Related papers (2021-08-16T19:35:43Z)
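A clustering-based pretext task, as in the land-cover entry above, typically means generating pseudo-labels with k-means and training a classifier against them; the DeepCluster-style sketch below is an assumption about the general recipe, not the paper's specific design.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def pretext_round(features, n_clusters=10):
    """One self-supervised round: cluster the current features into
    pseudo-classes, then fit a classifier to predict those cluster
    assignments as a free supervisory signal (no human labels)."""
    pseudo = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    clf = LogisticRegression(max_iter=1000).fit(features, pseudo)
    return clf, pseudo
```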
- Riggable 3D Face Reconstruction via In-Network Optimization [58.016067611038046]
This paper presents a method for riggable 3D face reconstruction from monocular images.
It jointly estimates a personalized face rig and per-image parameters including expressions, poses, and illuminations.
Experiments demonstrate that our method achieves SOTA reconstruction accuracy, reasonable robustness and generalization ability.
arXiv Detail & Related papers (2021-04-08T03:53:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.