Smart Data Representations: Impact on the Accuracy of Deep Neural
Networks
- URL: http://arxiv.org/abs/2111.09128v1
- Date: Wed, 17 Nov 2021 14:06:08 GMT
- Title: Smart Data Representations: Impact on the Accuracy of Deep Neural
Networks
- Authors: Oliver Neumann, Nicole Ludwig, Marian Turowski, Benedikt Heidrich,
Veit Hagenmeyer, Ralf Mikut
- Abstract summary: We analyze the impact of data representations on the performance of Deep Neural Networks using energy time series forecasting.
The results show that, depending on the forecast horizon, the same data representations can have a positive or negative impact on the accuracy of Deep Neural Networks.
- Score: 0.2446672595462589
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Neural Networks are able to solve many complex tasks with less
engineering effort and better performance. However, these networks often use
data for training and evaluation without investigating its representation,
i.e., the form of the used data. In the present paper, we analyze the impact of
data representations on the performance of Deep Neural Networks using energy
time series forecasting. Based on an overview of exemplary data
representations, we select four of them and evaluate
them using two different Deep Neural Network architectures and three
forecasting horizons on real-world energy time series. The results show that,
depending on the forecast horizon, the same data representations can have a
positive or negative impact on the accuracy of Deep Neural Networks.
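The core idea, that the same series can be presented to a network in several different forms, can be sketched as follows. The three representations and the toy load values below are illustrative assumptions for this summary, not the four representations evaluated in the paper:

```python
import math

# Illustrative sketch (not from the paper): three common ways to represent
# the same hourly energy time series before feeding it to a forecasting model.

def raw(series):
    """Identity representation: use the measurements as-is."""
    return list(series)

def differenced(series):
    """First differences: emphasize changes rather than absolute levels."""
    return [b - a for a, b in zip(series, series[1:])]

def cyclic_time(hours):
    """Encode hour-of-day as (sin, cos) so hour 23 lies close to hour 0."""
    return [(math.sin(2 * math.pi * h / 24), math.cos(2 * math.pi * h / 24))
            for h in hours]

load = [310.0, 305.5, 298.2, 301.1, 340.8]  # toy load values (kW)
hours = [0, 1, 2, 3, 4]

print(raw(load))
print(differenced(load))
print(cyclic_time(hours)[:2])
```

Each representation exposes different structure to the model (levels, changes, or daily periodicity), which is why the same choice can help at one forecast horizon and hurt at another.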
Related papers
- Steinmetz Neural Networks for Complex-Valued Data [23.80312814400945]
We introduce a new approach to processing complex-valued data using DNNs consisting of parallel real-valued networks with coupled outputs.
Our proposed class of architectures, referred to as Steinmetz Neural Networks, leverage multi-view learning to construct more interpretable representations within the latent space.
Our numerical experiments depict the improved performance and robustness to additive noise afforded by these networks on benchmark datasets and synthetic examples.
arXiv Detail & Related papers (2024-09-16T08:26:06Z)
- Efficient and Accurate Hyperspectral Image Demosaicing with Neural Network Architectures
This study investigates the effectiveness of neural network architectures in hyperspectral image demosaicing.
We introduce a range of network models and modifications, and compare them with classical methods and existing reference network approaches.
Results indicate that our networks outperform or match reference models on both datasets, demonstrating exceptional performance.
arXiv Detail & Related papers (2023-12-21T08:02:49Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions are unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Designing Deep Networks for Scene Recognition [3.493180651702109]
We conduct extensive experiments to demonstrate that widely accepted principles in network design may result in dramatic performance differences when the data is altered.
This paper presents a novel network design methodology: data-oriented network design.
We propose a Deep-Narrow Network and a Dilated Pooling module, which improve scene recognition performance using less than half of the computational resources.
arXiv Detail & Related papers (2023-03-13T18:28:06Z)
- Influencer Detection with Dynamic Graph Neural Networks [56.1837101824783]
We investigate different dynamic Graph Neural Networks (GNNs) configurations for influencer detection.
We show that using deep multi-head attention in GNN and encoding temporal attributes significantly improves performance.
arXiv Detail & Related papers (2022-11-15T13:00:25Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs [0.9786690381850356]
In industrial applications, data coming from an open-world setting might widely differ from the benchmark datasets on which a network was trained.
We develop a method to monitor trained neural networks based on the topological properties of their activation graphs.
arXiv Detail & Related papers (2021-05-07T14:16:03Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup allowing for a neural network to learn both its size and topology during the course of a gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.