Problems of representation of electrocardiograms in convolutional neural networks
- URL: http://arxiv.org/abs/2012.00493v1
- Date: Tue, 1 Dec 2020 14:02:06 GMT
- Title: Problems of representation of electrocardiograms in convolutional neural networks
- Authors: Iana Sereda, Sergey Alekseev, Aleksandra Koneva, Alexey Khorkin, Grigory Osipov
- Abstract summary: We show that these problems are systemic in nature. They stem from how convolutional networks handle composite objects whose parts are not rigidly fixed but instead have significant mobility.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Using electrocardiograms as an example, we demonstrate the characteristic problems that arise when modeling one-dimensional signals containing an inexactly repeating pattern by means of standard convolutional networks. We show that these problems are systemic in nature: they stem from how convolutional networks handle composite objects whose parts are not rigidly fixed but have significant mobility. We also demonstrate some counterintuitive effects related to generalization in deep networks.
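The core difficulty described in the abstract can be illustrated with a toy sketch (not the paper's actual experiments): a convolutional template matches a composite pattern only when the spacing between its sub-waves is fixed, so a whole-pattern shift is handled by translation equivariance, while internal "mobility" of the parts degrades the response.

```python
import numpy as np

def pulse(width=3):
    # a single rectangular sub-wave, standing in for one part of a complex
    return np.ones(width)

def composite(gap):
    # two sub-waves ("parts") separated by a variable internal gap
    return np.concatenate([pulse(), np.zeros(gap), pulse()])

kernel = composite(gap=5)                     # template tuned to one fixed spacing
sig_match = np.pad(composite(5), (10, 10))    # same internal spacing
sig_jitter = np.pad(composite(8), (10, 10))   # same parts, different spacing

def max_response(signal, kernel):
    # valid cross-correlation followed by global max pooling
    return max(np.dot(signal[i:i + len(kernel)], kernel)
               for i in range(len(signal) - len(kernel) + 1))

r_match = max_response(sig_match, kernel)
r_jitter = max_response(sig_jitter, kernel)

# shifting the WHOLE complex leaves the response unchanged (equivariance)...
assert max_response(np.roll(sig_match, 4), kernel) == r_match
# ...but moving the parts relative to each other lowers it sharply
assert r_match > r_jitter
```

Here only one of the two sub-waves can align with the template once the internal gap changes, which is one way to see why a rigid convolutional filter struggles with patterns whose parts are mobile.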
Related papers
- Deconvolving Complex Neuronal Networks into Interpretable Task-Specific Connectomes [12.762193569830593]
Task-specific functional MRI (fMRI) images provide excellent modalities for studying the neuronal basis of cognitive processes.
We use fMRI data to formulate and solve the problem of deconvolving task-specific aggregate neuronal networks into a set of basic building blocks called canonical networks.
arXiv Detail & Related papers (2024-06-28T19:13:48Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that make it possible to construct model-based networks with guaranteed high generalization.
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- From Compass and Ruler to Convolution and Nonlinearity: On the Surprising Difficulty of Understanding a Simple CNN Solving a Simple Geometric Estimation Task [6.230751621285322]
We propose to address a simple well-posed learning problem using a simple convolutional neural network.
Surprisingly, understanding what trained networks have learned is difficult and, to some extent, counter-intuitive.
arXiv Detail & Related papers (2023-03-12T11:30:49Z)
- Neural Network Complexity of Chaos and Turbulence [0.0]
We consider the relative complexity of chaos and turbulence from the perspective of deep neural networks.
We analyze a set of classification problems, where the network has to distinguish images of fluid profiles in the turbulent regime.
We quantify the complexity of the computation performed by the network via the intrinsic dimensionality of the internal feature representations.
arXiv Detail & Related papers (2022-11-24T13:21:36Z)
- Entangled Residual Mappings [59.02488598557491]
We introduce entangled residual mappings to generalize the structure of the residual connections.
An entangled residual mapping replaces the identity skip connections with specialized entangled mappings.
We show that while entangled mappings can preserve the iterative refinement of features across various deep models, they influence the representation learning process in convolutional networks.
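The summary above can be sketched in a toy form; for illustration only, an "entangled mapping" is assumed here to be a fixed non-identity linear operator on the skip path (the paper's actual constructions may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W, skip=None):
    # classic block: y = relu(W x) + x ; "entangled" variant: y = relu(W x) + M x
    M = np.eye(len(x)) if skip is None else skip
    return relu(W @ x) + M @ x

d = 4
W = rng.standard_normal((d, d)) * 0.1
x = rng.standard_normal(d)

# identity skip connection (standard ResNet-style block)
y_id = residual_block(x, W)

# hypothetical entangled skip: an orthogonal mixing of the first two channels
theta = 0.3
M = np.eye(d)
M[:2, :2] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]
y_ent = residual_block(x, W, skip=M)

assert y_id.shape == y_ent.shape == (d,)
# the non-identity skip changes the representation the block passes forward
assert not np.allclose(y_id, y_ent)
```

An orthogonal `M` still preserves norms on the skip path, which is one simple way such a mapping could keep iterative refinement intact while altering what the convolutional layers see.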
arXiv Detail & Related papers (2022-06-02T19:36:03Z)
- Training Adaptive Reconstruction Networks for Blind Inverse Problems [0.0]
We show through various applications that training the network with a family of forward operators allows solving the adaptivity problem without compromising the reconstruction quality significantly.
Experiments include partial Fourier sampling problems arising in magnetic resonance imaging (MRI) with sensitivity estimation and off-resonance effects, computerized tomography (CT) with a tilted geometry and image deblurring with Fresnel diffraction kernels.
arXiv Detail & Related papers (2022-02-23T07:56:02Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of non-commutative convolutional neural networks.
We show that non-commutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.