A Trainable Approach to Zero-delay Smoothing Spline Interpolation
- URL: http://arxiv.org/abs/2203.03776v4
- Date: Sun, 20 Aug 2023 21:27:52 GMT
- Title: A Trainable Approach to Zero-delay Smoothing Spline Interpolation
- Authors: Emilio Ruiz-Moreno, Luis Miguel López-Ramos, Baltasar Beferull-Lozano
- Abstract summary: The smooth signal must be reconstructed sequentially as soon as a data sample is available and without having access to subsequent data.
Here, each interpolation step yields a piece that ensures a smooth signal reconstruction while minimizing a cost metric.
This paper presents a novel approach to further reduce this cumulative cost on average.
- Score: 5.448070998907116
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of reconstructing smooth signals from streamed data in the form of
signal samples arises in various applications. This work addresses such a task
subject to a zero-delay response; that is, the smooth signal must be
reconstructed sequentially as soon as a data sample is available and without
having access to subsequent data. State-of-the-art approaches solve this
problem by interpolating consecutive data samples using splines. Here, each
interpolation step yields a piece that ensures a smooth signal reconstruction
while minimizing a cost metric, typically a weighted sum between the squared
residual and a derivative-based measure of smoothness. As a result, a
zero-delay interpolation is achieved in exchange for an almost certainly higher
cumulative cost as compared to interpolating all data samples together. This
paper presents a novel approach to further reduce this cumulative cost on
average. First, we formulate a zero-delay smoothing spline interpolation
problem from a sequential decision-making perspective, allowing us to model the
future impact of each interpolated piece on the average cumulative cost. Then,
an interpolation method is proposed to exploit the temporal dependencies
between the streamed data samples. Our method is assisted by a recurrent neural
network and accordingly trained to reduce the accumulated cost on average over
a set of example data samples collected from the same signal source generating
the signal to be reconstructed. Finally, we present extensive experimental
results for synthetic and real data showing how our approach outperforms the
abovementioned state-of-the-art.
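The per-step scheme the abstract describes can be sketched concretely. The following is a minimal, myopic baseline in the spirit of the state-of-the-art the paper improves upon (not the paper's RNN-assisted method): each streamed sample triggers the fit of one cubic piece $p(t) = a + bt + ct^2 + dt^3$ on $[0, h]$, where $a$ and $b$ are fixed by $C^1$ continuity with the previous piece, and $c, d$ minimize the weighted sum of the squared residual at the new sample and the curvature penalty $\lambda \int_0^h p''(t)^2\,dt$. The sampling interval `h`, the weight `lam`, and the zero initial slope are illustrative assumptions, not values from the paper.

```python
import numpy as np

def zero_delay_piece(a, b, h, y, lam):
    """Fit one cubic piece p(t) = a + b*t + c*t**2 + d*t**3 on [0, h].

    a, b are fixed by C^1 continuity with the previous piece; c, d minimize
    (p(h) - y)**2 + lam * integral_0^h p''(t)**2 dt, with lam > 0.
    """
    e = y - (a + b * h)  # residual of the C^1 extrapolation at the new sample
    # Normal equations of the quadratic cost in (c, d):
    # integral of p''^2 = 4c^2 h + 12 c d h^2 + 12 d^2 h^3
    A = np.array([
        [2 * h**4 + 8 * lam * h,     2 * h**5 + 12 * lam * h**2],
        [2 * h**5 + 12 * lam * h**2, 2 * h**6 + 24 * lam * h**3],
    ])
    rhs = np.array([2 * h**2 * e, 2 * h**3 * e])
    c, d = np.linalg.solve(A, rhs)
    return c, d

def reconstruct(samples, h=1.0, lam=0.1):
    """Sequentially (zero-delay) reconstruct a C^1 piecewise-cubic signal."""
    a, b = samples[0], 0.0  # start at the first sample with zero slope (assumption)
    pieces = []
    for y in samples[1:]:
        c, d = zero_delay_piece(a, b, h, y, lam)
        pieces.append((a, b, c, d))
        # Carry the end value and slope into the next piece (C^1 continuity)
        a = a + b * h + c * h**2 + d * h**3
        b = b + 2 * c * h + 3 * d * h**2
    return pieces
```

Because each piece is fitted greedily with no model of future samples, the smoothness penalty pulls $p(h)$ away from the new sample and the cumulative cost is almost certainly higher than batch smoothing splines; the paper's contribution is to replace this myopic step with one trained to anticipate the temporal dependencies of the stream.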
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we present a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed approach avoids the need for computationally expensive spectral methods, using a simple gradient step while remaining robust to outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z)
- Consistent Signal Reconstruction from Streaming Multivariate Time Series [5.448070998907116]
We formalize for the first time the concept of consistent signal reconstruction from streaming time-series data.
Our method achieves a favorable error-rate decay with the sampling rate compared to a similar but non-consistent reconstruction.
arXiv Detail & Related papers (2023-08-23T22:50:52Z)
- Gradient Coding with Iterative Block Leverage Score Sampling [42.21200677508463]
We generalize the leverage score sampling sketch for $\ell$-subspace embeddings, to accommodate sampling subsets of the transformed data.
This is then used to derive an approximate coded computing approach for first-order methods.
arXiv Detail & Related papers (2023-08-06T12:22:12Z)
- Samplet basis pursuit: Multiresolution scattered data approximation with sparsity constraints [0.0]
We consider scattered data approximation in samplet coordinates with $\ell_1$-regularization.
By using the Riesz isometry, we embed samplets into reproducing kernel Hilbert spaces.
We argue that the class of signals that are sparse with respect to the embedded samplet basis is considerably larger than the class of signals that are sparse with respect to the basis of kernel translates.
arXiv Detail & Related papers (2023-06-16T21:20:49Z)
- Boosting Fast and High-Quality Speech Synthesis with Linear Diffusion [85.54515118077825]
This paper proposes a linear diffusion model (LinDiff) based on an ordinary differential equation to simultaneously reach fast inference and high sample quality.
To reduce computational complexity, LinDiff employs a patch-based processing approach that partitions the input signal into small patches.
Our model can synthesize speech of a quality comparable to that of autoregressive models with faster synthesis speed.
arXiv Detail & Related papers (2023-06-09T07:02:43Z)
- Refining Amortized Posterior Approximations using Gradient-Based Summary Statistics [0.9176056742068814]
We present an iterative framework to improve the amortized approximations of posterior distributions in the context of inverse problems.
We validate our method in a controlled setting by applying it to a stylized problem, and observe improved posterior approximations with each iteration.
arXiv Detail & Related papers (2023-05-15T15:47:19Z)
- Multisample Flow Matching: Straightening Flows with Minibatch Couplings [38.82598694134521]
Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
We propose Multisample Flow Matching, a more general framework that uses non-trivial couplings between data and noise samples.
We show that our proposed methods improve sample consistency on downsampled ImageNet data sets, and lead to better low-cost sample generation.
arXiv Detail & Related papers (2023-04-28T11:33:08Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Evaluating representations by the complexity of learning low-loss predictors [55.94170724668857]
We consider the problem of evaluating representations of data for use in solving a downstream task.
We propose to measure the quality of a representation by the complexity of learning a predictor on top of the representation that achieves low loss on a task of interest.
arXiv Detail & Related papers (2020-09-15T22:06:58Z)
- Non-Adaptive Adaptive Sampling on Turnstile Streams [57.619901304728366]
We give the first relative-error algorithms for column subset selection, subspace approximation, projective clustering, and volume on turnstile streams that use space sublinear in $n$.
Our adaptive sampling procedure has a number of applications to various data summarization problems that either improve state-of-the-art or have only been previously studied in the more relaxed row-arrival model.
arXiv Detail & Related papers (2020-04-23T05:00:21Z)
- Spatially Adaptive Inference with Stochastic Feature Sampling and Interpolation [72.40827239394565]
We propose to compute features only at sparsely sampled locations.
We then densely reconstruct the feature map with an efficient procedure.
The presented network is experimentally shown to save substantial computation while maintaining accuracy over a variety of computer vision tasks.
arXiv Detail & Related papers (2020-03-19T15:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.