ENTIRE: Learning-based Volume Rendering Time Prediction
- URL: http://arxiv.org/abs/2501.12119v1
- Date: Tue, 21 Jan 2025 13:30:16 GMT
- Title: ENTIRE: Learning-based Volume Rendering Time Prediction
- Authors: Zikai Yin, Hamid Gadirov, Jiri Kosinka, Steffen Frey
- Abstract summary: ENTIRE is a novel approach for volume rendering time prediction.
We first extract a feature vector from a volume that captures the structure relevant to its rendering time.
Our experiments conducted on various datasets demonstrate that our model achieves high prediction accuracy with fast response times.
- Score: 3.890480928425776
- License:
- Abstract: We present ENTIRE, a novel approach for volume rendering time prediction. Time-dependent volume data from simulations or experiments typically comprise complex deforming structures across hundreds or thousands of time steps, which, in addition to the camera configuration, have a significant impact on rendering performance. We first extract a feature vector from a volume that captures the structure relevant to its rendering time. We then combine this feature vector with further relevant parameters (e.g. the camera setup) to perform the final prediction. Our experiments conducted on various datasets demonstrate that our model achieves high prediction accuracy with fast response times. We showcase ENTIRE's capability of enabling dynamic parameter adaptation for stable frame rates and load balancing in two case studies.
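The abstract describes a two-stage pipeline: a feature vector is extracted from the volume, concatenated with further rendering parameters such as the camera setup, and fed to a final predictor. Below is a minimal sketch of that idea, assuming a 3D CNN encoder and an MLP regression head in PyTorch; all module names, layer sizes, and the parameter encoding are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a volume-feature + render-parameter regression pipeline (assumed design,
# not the ENTIRE implementation): encode the volume, concatenate rendering parameters,
# and regress the expected rendering time.
import torch
import torch.nn as nn


class VolumeFeatureExtractor(nn.Module):
    """Hypothetical 3D CNN that maps a scalar volume to a fixed-size feature vector."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        x = self.encoder(volume).flatten(1)
        return self.proj(x)


class RenderTimePredictor(nn.Module):
    """Combines the volume feature vector with camera/render parameters."""

    def __init__(self, feat_dim: int = 128, param_dim: int = 8):
        super().__init__()
        self.extractor = VolumeFeatureExtractor(feat_dim)
        self.head = nn.Sequential(
            nn.Linear(feat_dim + param_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted rendering time (e.g. in milliseconds)
        )

    def forward(self, volume: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        feat = self.extractor(volume)
        return self.head(torch.cat([feat, params], dim=1))


if __name__ == "__main__":
    model = RenderTimePredictor()
    volume = torch.rand(2, 1, 64, 64, 64)  # batch of scalar volumes
    params = torch.rand(2, 8)              # e.g. camera position, view direction, step size
    print(model(volume, params).shape)     # -> torch.Size([2, 1])
```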
Related papers
- DCIts -- Deep Convolutional Interpreter for time series [0.0]
The model is designed so one can robustly determine the optimal window size that captures all necessary interactions within the smallest possible time frame.
It effectively identifies the optimal model order, balancing complexity when incorporating higher-order terms.
These advancements hold significant implications for modeling and understanding dynamic systems, making the model a valuable tool for applied and computational physicists.
arXiv Detail & Related papers (2025-01-08T08:21:58Z) - MATEY: multiscale adaptive foundation models for spatiotemporal physical systems [2.7767126393602726]
We propose two adaptive tokenization schemes that dynamically adjust patch sizes based on local features.
We evaluate the performance of a proposed multiscale adaptive model, MATEY, in a sequence of experiments.
We also demonstrate fine-tuning tasks featuring different physics using models pretrained on PDE data.
arXiv Detail & Related papers (2024-12-29T22:13:16Z) - EDformer: Embedded Decomposition Transformer for Interpretable Multivariate Time Series Predictions [4.075971633195745]
This paper introduces an embedded transformer, 'EDformer', for time series forecasting tasks.
Without altering the fundamental elements, we reuse the Transformer architecture and reconsider the roles of its constituent parts.
The model obtains state-of-the-art predicting results in terms of accuracy and efficiency on complex real-world time series datasets.
arXiv Detail & Related papers (2024-12-16T11:13:57Z) - Adapting to Length Shift: FlexiLength Network for Trajectory Prediction [53.637837706712794]
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction models against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z) - Skeleton2vec: A Self-supervised Learning Framework with Contextualized Target Representations for Skeleton Sequence [56.092059713922744]
We show that using high-level contextualized features as prediction targets can achieve superior performance.
Specifically, we propose Skeleton2vec, a simple and efficient self-supervised 3D action representation learning framework.
Our proposed Skeleton2vec outperforms previous methods and achieves state-of-the-art results.
arXiv Detail & Related papers (2024-01-01T12:08:35Z) - STDepthFormer: Predicting Spatio-temporal Depth from Video with a Self-supervised Transformer Model [0.0]
A self-supervised model that simultaneously predicts a sequence of future frames from video input with a spatio-temporal attention network is proposed.
The proposed model leverages prior scene knowledge such as object shape and texture similar to single-image depth inference methods.
It is implicitly capable of forecasting the motion of objects in the scene, rather than requiring complex models involving multi-object detection, segmentation and tracking.
arXiv Detail & Related papers (2023-03-02T12:22:51Z) - Predicting Surface Texture in Steel Manufacturing at Speed [81.90215579427463]
Control of the surface texture of steel strip during the galvanizing and temper rolling processes is essential to satisfy customer requirements.
We propose the use of machine learning to improve the accuracy of the transformation from inline laser reflection measurements to a prediction of surface properties.
arXiv Detail & Related papers (2023-01-20T12:11:03Z) - Visual-tactile sensing for Real-time liquid Volume Estimation in Grasping [58.50342759993186]
We propose a visuo-tactile model for real-time estimation of the liquid inside a deformable container.
We fuse two sensory modalities, i.e., the raw visual inputs from the RGB camera and the tactile cues from our specific tactile sensor.
The robotic system is well controlled and adjusted based on the estimation model in real time.
arXiv Detail & Related papers (2022-02-23T13:38:31Z) - TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z) - A Systematic Exploration of Reservoir Computing for Forecasting Complex Spatiotemporal Dynamics [0.0]
A reservoir computer (RC) is a type of recurrent neural network that has demonstrated success in predicting intrinsically chaotic dynamical systems.
We explore the architecture and design choices for a "best in class" RC for a number of characteristic dynamical systems.
We show the application of these choices in scaling up to larger models using localization.
arXiv Detail & Related papers (2022-01-21T22:31:12Z) - Convolutional Tensor-Train LSTM for Spatio-temporal Learning [116.24172387469994]
We propose a higher-order LSTM model that can efficiently learn long-term correlations in the video sequence.
This is accomplished through a novel tensor train module that performs prediction by combining convolutional features across time.
Our results achieve state-of-the-art performance in a wide range of applications and datasets.
arXiv Detail & Related papers (2020-02-21T05:00:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.