NeuroDAVIS: A neural network model for data visualization
- URL: http://arxiv.org/abs/2304.01222v1
- Date: Sat, 1 Apr 2023 21:20:34 GMT
- Authors: Chayan Maitra, Dibyendu B. Seal and Rajat K. De
- Abstract summary: We introduce a novel unsupervised deep neural network model, called NeuroDAVIS, for data visualization.
NeuroDAVIS is capable of extracting important features from the data, without assuming any data distribution.
It has been shown theoretically that the neighbourhood relationships of the data in high dimension remain preserved in lower dimension.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The task of dimensionality reduction and visualization of high-dimensional
datasets has long remained a challenging problem. Modern high-throughput
technologies produce newer high-dimensional datasets having multiple views with
relatively new data types. Visualization of these datasets requires proper
methodology that can uncover hidden patterns in the data without affecting the
local and global structures within the data. To this end, however, very few
such methodologies exist that can realise this task. In this work, we have
introduced a novel unsupervised deep neural network model, called NeuroDAVIS,
for data visualization. NeuroDAVIS is capable of extracting important features
from the data, without assuming any data distribution, and visualizing them
effectively in lower dimension. It has been shown theoretically that the
neighbourhood relationships of the data in high dimension remain preserved in
lower dimension. The performance of NeuroDAVIS has been evaluated on a wide
variety of synthetic and real high-dimensional datasets including numeric,
textual, image and biological data. NeuroDAVIS has been highly competitive
against both t-Distributed Stochastic Neighbor Embedding (t-SNE) and Uniform
Manifold Approximation and Projection (UMAP) with respect to visualization
quality, and preservation of data size, shape, and both local and global
structure. It has outperformed Fast interpolation-based t-SNE (Fit-SNE), a
variant of t-SNE, for most of the high-dimensional datasets as well. For the
biological datasets, besides t-SNE, UMAP and Fit-SNE, NeuroDAVIS has also
performed well compared to other state-of-the-art algorithms, like Potential of
Heat-diffusion for Affinity-based Trajectory Embedding (PHATE) and the siamese
neural network-based method, called IVIS. Downstream classification and
clustering analyses have also revealed favourable results for
NeuroDAVIS-generated embeddings.
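The neighbourhood-preservation comparison described above can be quantified with a standard metric. The sketch below is illustrative only: NeuroDAVIS itself is not assumed to be installed, so scikit-learn's t-SNE stands in as the embedder, and the trustworthiness score would apply unchanged to embeddings produced by NeuroDAVIS, UMAP, or Fit-SNE.

```python
# Sketch: scoring how well a 2-D embedding preserves local neighbourhoods,
# in the spirit of the evaluation described in the abstract.
# t-SNE is used here only as a stand-in embedder; any method's 2-D output
# (NeuroDAVIS, UMAP, Fit-SNE, ...) can be scored the same way.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X, _ = load_digits(return_X_y=True)  # 1797 samples, 64 dimensions

# Project to 2-D (swap in another embedder's output here).
emb = TSNE(n_components=2, random_state=0).fit_transform(X)

# Trustworthiness lies in [0, 1]; values near 1 mean high-dimensional
# neighbours remain neighbours in the low-dimensional embedding.
score = trustworthiness(X, emb, n_neighbors=10)
print(f"trustworthiness: {score:.3f}")
```

Downstream classification or clustering on `emb`, as reported in the paper, is a complementary check on global structure.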
Related papers
- Sampling-guided Heterogeneous Graph Neural Network with Temporal Smoothing for Scalable Longitudinal Data Imputation [17.81217890585335]
We propose a novel framework, the Sampling-guided Heterogeneous Graph Neural Network (SHT-GNN), to tackle the challenge of missing data imputation.
By leveraging subject-wise mini-batch sampling and a multi-layer temporal smoothing mechanism, SHT-GNN efficiently scales to large datasets.
Experiments on both synthetic and real-world datasets, including the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, demonstrate that SHT-GNN significantly outperforms existing imputation methods.
arXiv Detail & Related papers (2024-11-07T17:41:07Z)
- A Generative Self-Supervised Framework using Functional Connectivity in fMRI Data [15.211387244155725]
Deep neural networks trained on Functional Connectivity (FC) networks extracted from functional Magnetic Resonance Imaging (fMRI) data have gained popularity.
Recent research on the application of Graph Neural Network (GNN) to FC suggests that exploiting the time-varying properties of the FC could significantly improve the accuracy and interpretability of the model prediction.
The high cost of acquiring high-quality fMRI data and corresponding labels poses a hurdle to their application in real-world settings.
We propose a generative SSL approach that is tailored to effectively harness temporal information within dynamic FC.
arXiv Detail & Related papers (2023-12-04T16:14:43Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Transferability of coVariance Neural Networks and Application to Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs, in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data-processing architecture of GCNs, and here we show that VNNs exhibit transferability of performance across datasets whose covariance matrices converge to a limit object.
arXiv Detail & Related papers (2023-05-02T22:15:54Z)
- Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks [17.12834153477201]
We propose NeuroFS, a novel resource-efficient supervised feature selection method using sparse neural networks.
By gradually pruning the uninformative features from the input layer of a sparse neural network trained from scratch, NeuroFS derives an informative subset of features efficiently.
NeuroFS achieves the highest ranking-based score among the considered state-of-the-art supervised feature selection models.
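The idea of ranking input features by their connections in a trained network can be illustrated with a deliberately simplified sketch. This is not the NeuroFS algorithm (which gradually prunes a sparse network trained from scratch); it only demonstrates the underlying intuition, using a small dense MLP and the L2 norm of first-layer weights as an importance proxy.

```python
# Simplified illustration (not NeuroFS itself): rank input features by the
# strength of their outgoing first-layer connections in a trained network,
# then keep an informative subset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # 64 input features
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                    random_state=0).fit(X, y)

# Importance proxy: L2 norm of each input feature's first-layer weight row.
importance = np.linalg.norm(clf.coefs_[0], axis=1)
top_k = np.argsort(importance)[::-1][:16]  # keep the 16 strongest features
print("selected feature indices:", sorted(top_k.tolist()))
```

NeuroFS replaces this one-shot ranking with gradual pruning during sparse training, which the paper reports as both more efficient and more accurate.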
arXiv Detail & Related papers (2023-03-10T17:09:55Z)
- Predicting Brain Age using Transferable coVariance Neural Networks [119.45320143101381]
We have recently studied covariance neural networks (VNNs) that operate on sample covariance matrices.
In this paper, we demonstrate the utility of VNNs in inferring brain age using cortical thickness data.
Our results show that VNNs exhibit multi-scale and multi-site transferability for inferring brain age.
In the context of brain age in Alzheimer's disease (AD), our experiments show that VNN outputs are interpretable, as brain age predicted using VNNs is significantly elevated for AD subjects with respect to healthy subjects.
arXiv Detail & Related papers (2022-10-28T18:58:34Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Comparing SNNs and RNNs on Neuromorphic Vision Datasets: Similarities and Differences [36.82069150045153]
Spiking neural networks (SNNs) and recurrent neural networks (RNNs) are benchmarked on neuromorphic data.
In this work, we make a systematic study to compare SNNs and RNNs on neuromorphic data.
arXiv Detail & Related papers (2020-05-02T10:19:37Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- FsNet: Feature Selection Network on High-dimensional Biological Data [16.212816276636087]
We propose a deep neural network (DNN)-based nonlinear feature selection method, called the feature selection network (FsNet), for high-dimensional data with a small number of samples.
FsNet comprises a selection layer that selects features and a reconstruction layer that stabilizes the training.
Because a large number of parameters in the selection and reconstruction layers can easily result in overfitting under a limited number of samples, we use two tiny networks to predict the large, virtual weight matrices of the selection and reconstruction layers.
arXiv Detail & Related papers (2020-01-23T00:49:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.