Into the Void: Mapping the Unseen Gaps in High Dimensional Data
- URL: http://arxiv.org/abs/2501.15273v1
- Date: Sat, 25 Jan 2025 16:57:21 GMT
- Title: Into the Void: Mapping the Unseen Gaps in High Dimensional Data
- Authors: Xinyu Zhang, Tyler Estro, Geoff Kuenning, Erez Zadok, Klaus Mueller
- Abstract summary: We present a comprehensive pipeline, augmented by a visual analytics system named "GapMiner".
It is aimed at exploring and exploiting untapped opportunities within the empty areas of high-dimensional datasets.
- Score: 23.226089369715016
- License:
- Abstract: We present a comprehensive pipeline, augmented by a visual analytics system named "GapMiner", that is aimed at exploring and exploiting untapped opportunities within the empty areas of high-dimensional datasets. Our approach begins with an initial dataset and then uses a novel Empty Space Search Algorithm (ESA) to identify the center points of these uncharted voids, which are regarded as reservoirs containing potentially valuable novel configurations. Initially, this process is guided by user interactions facilitated by GapMiner. GapMiner visualizes the Empty Space Configurations (ESC) identified by the search within the context of the data, enabling domain experts to explore and adjust ESCs using a linked parallel-coordinate display. These interactions enhance the dataset and contribute to the iterative training of a connected deep neural network (DNN). As the DNN trains, it gradually assumes the task of identifying high-potential ESCs, diminishing the need for direct user involvement. Ultimately, once the DNN achieves adequate accuracy, it autonomously guides the exploration of optimal configurations by predicting performance and refining configurations, using a combination of gradient ascent and improved empty-space searches. Domain users were actively engaged throughout the development of our system. Our findings demonstrate that our methodology consistently produces substantially superior novel configurations compared to conventional randomization-based methods. We illustrate the effectiveness of our method through several case studies addressing various objectives, including parameter optimization, adversarial learning, and reinforcement learning.
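The abstract does not spell out the Empty Space Search Algorithm (ESA), but its core idea, finding points that lie far from every existing observation and treating them as candidate configurations, can be illustrated with a simple nearest-neighbor stand-in. The sketch below is only a rough approximation under that assumption; `empty_space_candidates`, `refine_by_gradient_ascent`, and all parameter values are hypothetical and not taken from GapMiner, and finite-difference gradient ascent stands in for the gradients of the trained DNN described in the paper.

```python
# A minimal sketch (not the paper's ESA): sample candidate points inside the
# data's bounding box and keep those farthest from any existing data point,
# treating them as centers of empty regions ("gaps").
import numpy as np
from scipy.spatial import cKDTree

def empty_space_candidates(data, n_candidates=10000, n_keep=10, seed=None):
    """Return hypothetical gap centers: sampled points far from all data points."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    candidates = rng.uniform(lo, hi, size=(n_candidates, data.shape[1]))
    dist, _ = cKDTree(data).query(candidates, k=1)   # nearest-neighbor distances
    keep = np.argsort(dist)[-n_keep:]                # largest distances = emptiest areas
    return candidates[keep], dist[keep]

def refine_by_gradient_ascent(x0, predict, step=0.05, n_steps=50, eps=1e-3):
    """Nudge a configuration toward higher predicted performance.

    `predict` is any black-box surrogate mapping a configuration to a scalar
    score; finite differences stand in for the DNN gradients used in the paper.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        grad = np.array([(predict(x + eps * e) - predict(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x += step * grad
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(500, 6))                 # hypothetical 6-D dataset
    centers, radii = empty_space_candidates(data, seed=1)
    best = refine_by_gradient_ascent(centers[0], predict=lambda x: -np.sum(x**2))
    print(radii.round(2), best.round(2))
```

In the paper's pipeline, the trained DNN supplies the performance predictions and the refined configurations are fed back to augment the dataset; the toy quadratic `predict` above is purely illustrative.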
Related papers
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD).
This method systematically explores hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z) - Unsupervised Semantic Segmentation Through Depth-Guided Feature Correlation and Sampling [14.88236554564287]
In this work, we build upon advances in unsupervised learning by incorporating information about the structure of a scene into the training process.
We achieve this by (1) learning depth-feature correlation, spatially correlating the feature maps with the depth maps to induce knowledge about the structure of the scene.
We then (2) implement farthest-point sampling to select relevant features more effectively by utilizing 3D sampling techniques on the depth information of the scene.
arXiv Detail & Related papers (2023-09-21T11:47:01Z) - Deep Kernel Methods Learn Better: From Cards to Process Optimization [0.7587345054583298]
We show that DKL with active learning can produce a more compact and smooth latent space.
We demonstrate this behavior using a simple cards data set and extend it to the optimization of domain-generated trajectories in physical systems.
arXiv Detail & Related papers (2023-03-25T20:21:29Z) - On the Effective Usage of Priors in RSS-based Localization [56.68864078417909]
In this paper, we study the localization problem in dense urban settings.
We propose LocUNet, a Received Signal Strength (RSS) fingerprint and convolutional neural network-based localization algorithm.
We first recognize LocUNet's ability to learn, from the training data, the underlying prior distribution of the receiver (Rx) position, or of the Rx and transmitter (Tx) association preferences, and attribute its high performance to this capability.
arXiv Detail & Related papers (2022-11-28T00:31:02Z) - Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
arXiv Detail & Related papers (2022-03-11T01:51:54Z) - Enhanced Exploration in Neural Feature Selection for Deep Click-Through Rate Prediction Models via Ensemble of Gating Layers [7.381829794276824]
The goal of neural feature selection (NFS) is to choose a relatively small subset of features with the best explanatory power.
The gating approach inserts a set of differentiable binary gates to drop less informative features.
To improve the exploration capacity of gradient-based solutions, we propose a simple but effective ensemble learning approach.
arXiv Detail & Related papers (2021-12-07T04:37:05Z) - Adaptive Informative Path Planning Using Deep Reinforcement Learning for UAV-based Active Sensing [2.6519061087638014]
We propose a new approach for informative path planning based on deep reinforcement learning (RL).
Our method combines Monte Carlo tree search with an offline-learned neural network predicting informative sensing actions.
By deploying the trained network during a mission, our method enables sample-efficient online replanning on physical platforms with limited computational resources.
arXiv Detail & Related papers (2021-09-28T09:00:55Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of DNTDF, features from higher levels are read in via the progressive compression shortcut paths (PCSP).
arXiv Detail & Related papers (2021-02-18T03:14:02Z) - Overcoming Catastrophic Forgetting via Direction-Constrained Optimization [43.53836230865248]
We study a new design of the optimization algorithm for training deep learning models with a fixed architecture of the classification network in a continual learning framework.
We present our direction-constrained optimization (DCO) method, where for each task we introduce a linear autoencoder to approximate its corresponding top forbidden principal directions.
We demonstrate that our algorithm performs favorably compared to other state-of-the-art regularization-based continual learning methods.
arXiv Detail & Related papers (2020-11-25T08:45:21Z) - Learning to Hash with Graph Neural Networks for Recommender Systems [103.82479899868191]
Graph representation learning has attracted much attention in supporting high quality candidate search at scale.
Despite its effectiveness in learning embedding vectors for objects in the user-item interaction network, the computational cost of inferring users' preferences in a continuous embedding space is tremendous.
We propose a simple yet effective discrete representation learning framework to jointly learn continuous and discrete codes.
arXiv Detail & Related papers (2020-03-04T06:59:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.