Compressive Modeling and Visualization of Multivariate Scientific Data using Implicit Neural Representation
- URL: http://arxiv.org/abs/2510.15535v1
- Date: Fri, 17 Oct 2025 11:09:55 GMT
- Title: Compressive Modeling and Visualization of Multivariate Scientific Data using Implicit Neural Representation
- Authors: Abhay Kumar Dwivedi, Shanu Saklani, Soumya Dutta
- Abstract summary: We develop compressed neural representations for multivariate datasets containing tens to hundreds of variables. Our approach utilizes a single network to learn representations for all data variables simultaneously. We demonstrate superior performance in terms of reconstructed data quality, visualization quality, preservation of dependency information among variables, and storage efficiency.
- Score: 5.742682177744733
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The extensive adoption of Deep Neural Networks has led to their increased utilization in challenging scientific visualization tasks. Recent advancements in building compressed data models using implicit neural representations have shown promising results for tasks like spatiotemporal volume visualization and super-resolution. Inspired by these successes, we develop compressed neural representations for multivariate datasets containing tens to hundreds of variables. Our approach utilizes a single network to learn representations for all data variables simultaneously through parameter sharing. This allows us to achieve state-of-the-art data compression. Through comprehensive evaluations, we demonstrate superior performance in terms of reconstructed data quality, rendering and visualization quality, preservation of dependency information among variables, and storage efficiency.
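A minimal sketch of the parameter-sharing idea described above, assuming a coordinate-based MLP that is shared by all variables and a small learned embedding that tells the network which variable to decode. The layer widths, embedding size, activation choice, and training loop below are illustrative assumptions, not the authors' implementation.

```python
# Sketch: one shared MLP decodes every variable of a multivariate volume.
# A per-variable embedding is concatenated with the spatial coordinates,
# so all variables share the same network weights (parameter sharing).
import torch
import torch.nn as nn

class MultivariateINR(nn.Module):
    def __init__(self, n_vars, coord_dim=3, embed_dim=16, hidden=256, layers=4):
        super().__init__()
        self.var_embed = nn.Embedding(n_vars, embed_dim)   # learned code per variable
        dims = [coord_dim + embed_dim] + [hidden] * layers
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.mlp = nn.Sequential(*blocks, nn.Linear(hidden, 1))

    def forward(self, coords, var_ids):
        # coords: (N, 3) normalized positions; var_ids: (N,) variable indices
        z = self.var_embed(var_ids)
        return self.mlp(torch.cat([coords, z], dim=-1)).squeeze(-1)

# Training sketch: regress sampled (position, variable, value) triples with MSE.
model = MultivariateINR(n_vars=50)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(4096, 3)                  # stand-in for sampled voxel positions
var_ids = torch.randint(0, 50, (4096,))       # which variable each sample comes from
values = torch.rand(4096)                     # stand-in for normalized data values
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords, var_ids), values)
    loss.backward()
    opt.step()
```

After training, the shared weights plus the per-variable embeddings serve as the compressed model, and reconstructing any variable at any location is a single forward pass. Dependency preservation, which the abstract lists as an evaluation criterion, could be probed generically (not necessarily with the paper's metric) by comparing pairwise correlation or mutual-information matrices of the original and reconstructed variables.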
Related papers
- Detail Across Scales: Multi-Scale Enhancement for Full Spectrum Neural Representations [4.899720537787801]
Implicit neural representations (INRs) have emerged as a compact and parametric alternative to discrete array-based data representations. We propose WIEN-INR, a wavelet-informed implicit neural representation that distributes modeling across different resolution scales. We show that WIEN-INR achieves superior reconstruction fidelity while maintaining a compact model size.
arXiv Detail & Related papers (2025-09-19T00:15:39Z)
- Representation-Enhanced Neural Knowledge Integration with Application to Large-Scale Medical Ontology Learning [3.010503480024405]
We propose a theoretically guaranteed statistical framework, called RENKI, to enable simultaneous learning of relation types.
The proposed framework incorporates representation learning output into initial entity embedding of a neural network that approximates the score function for the knowledge graph.
We demonstrate the effect of weighting in the presence of heterogeneous relations and the benefit of incorporating representation learning in nonparametric models.
arXiv Detail & Related papers (2024-10-09T21:38:48Z)
- Feature-to-Image Data Augmentation: Improving Model Feature Extraction with Cluster-Guided Synthetic Samples [4.041834517339835]
This study introduces FICAug, a novel feature-to-image data augmentation framework. It is designed to improve model generalization under limited data conditions by generating structured synthetic samples. Experimental results demonstrate that FICAug significantly improves classification accuracy.
arXiv Detail & Related papers (2024-09-26T09:51:08Z)
- A Simple Background Augmentation Method for Object Detection with Diffusion Model [53.32935683257045]
In computer vision, it is well-known that a lack of data diversity will impair model performance.
We propose a simple yet effective data augmentation approach by leveraging advancements in generative models.
Background augmentation, in particular, significantly improves the models' robustness and generalization capabilities.
arXiv Detail & Related papers (2024-08-01T07:40:00Z)
- Uncovering the Hidden Cost of Model Compression [43.62624133952414]
Visual Prompting has emerged as a pivotal method for transfer learning in computer vision.
Model compression detrimentally impacts the performance of visual prompting-based transfer.
However, negative effects on calibration are not present when models are compressed via quantization.
arXiv Detail & Related papers (2023-08-29T01:47:49Z)
- Distributed Neural Representation for Reactive in situ Visualization [23.80657290203846]
Implicit neural representations (INRs) have emerged as a powerful tool for compressing large-scale volume data.
We develop a distributed neural representation and optimize it for in situ visualization.
Our technique eliminates data exchanges between processes, achieving state-of-the-art compression speed, quality and ratios.
arXiv Detail & Related papers (2023-03-28T03:55:47Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Multi-dataset Training of Transformers for Robust Action Recognition [75.5695991766902]
We study the task of learning robust feature representations that generalize well across multiple datasets for action recognition.
Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss.
We verify the effectiveness of our method on five challenging datasets, Kinetics-400, Kinetics-700, Moments-in-Time, Activitynet and Something-something-v2.
arXiv Detail & Related papers (2022-09-26T01:30:43Z)
- IDLat: An Importance-Driven Latent Generation Method for Scientific Data [12.93181915755184]
We present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis.
We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation.
arXiv Detail & Related papers (2022-08-05T18:23:22Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Video Coding for Machine: Compact Visual Representation Compression for Intelligent Collaborative Analytics [101.35754364753409]
Video Coding for Machines (VCM) is committed to bridging the partly separate research tracks of video/image compression and feature compression.
This paper summarizes VCM methodology and philosophy based on existing academia and industrial efforts.
arXiv Detail & Related papers (2021-10-18T12:42:13Z)
- Revisit Visual Representation in Analytics Taxonomy: A Compression Perspective [69.99087941471882]
We study the problem of supporting multiple machine vision analytics tasks with the compressed visual representation.
By utilizing the intrinsic transferability among different tasks, our framework successfully constructs compact and expressive representations at low bit-rates.
To impose compactness on the representations, we propose a codebook-based hyperprior.
arXiv Detail & Related papers (2021-06-16T01:44:32Z)