Latent Discretization for Continuous-time Sequence Compression
- URL: http://arxiv.org/abs/2212.13659v1
- Date: Wed, 28 Dec 2022 01:15:27 GMT
- Title: Latent Discretization for Continuous-time Sequence Compression
- Authors: Ricky T. Q. Chen, Matthew Le, Matthew Muckley, Maximilian Nickel,
Karen Ullrich
- Abstract summary: In this work, we treat data sequences as observations from an underlying continuous-time process.
We show that our approaches can automatically achieve reductions in bit rates by learning how to discretize.
- Score: 21.062288207034968
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural compression offers a domain-agnostic approach to creating codecs for
lossy or lossless compression via deep generative models. For sequence
compression, however, most deep sequence models have costs that scale with the
sequence length rather than the sequence complexity. In this work, we instead
treat data sequences as observations from an underlying continuous-time process
and learn how to efficiently discretize while retaining information about the
full sequence. By decoupling sequential information from its temporal
discretization, our approach allows for higher compression rates and lower
computational cost. Moreover, the continuous-time approach
naturally allows us to decode at different time intervals. We empirically
verify our approach on multiple domains involving compression of video and
motion capture sequences, showing that our approaches can automatically achieve
reductions in bit rates by learning how to discretize.
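The paper's model learns where to place discretization points; as a rough intuition for why adaptive discretization saves bits, the sketch below is a hypothetical illustration (not the authors' method): a greedy Ramer-Douglas-Peucker-style selection that keeps few samples where a signal is smooth and many where it varies, after which the sequence can be decoded at any time grid by interpolation. The tolerance, signal, and function names are all illustrative assumptions.

```python
import numpy as np

def adaptive_discretize(t, x, tol=0.05):
    """Greedy keypoint selection: keep only the samples needed so that
    linear interpolation stays within `tol` of the original signal."""
    keep = {0, len(t) - 1}

    def refine(i, j):
        if j <= i + 1:
            return  # adjacent samples: nothing between them to check
        # deviation of the straight line from endpoint i to endpoint j
        interp = np.interp(t[i + 1:j], [t[i], t[j]], [x[i], x[j]])
        err = np.abs(x[i + 1:j] - interp)
        k = int(np.argmax(err))
        if err[k] > tol:
            m = i + 1 + k          # keep the worst-offending sample
            keep.add(m)
            refine(i, m)
            refine(m, j)

    refine(0, len(t) - 1)
    idx = sorted(keep)
    return t[idx], x[idx]

# flat early segment, oscillating late segment: complexity varies over time
t = np.linspace(0.0, 1.0, 200)
x = np.where(t < 0.5, 0.1 * t, np.sin(12 * np.pi * (t - 0.5)))

tk, xk = adaptive_discretize(t, x, tol=0.02)
x_hat = np.interp(t, tk, xk)  # decode at any time grid, incl. the original
print(len(tk), np.abs(x - x_hat).max())
```

The flat half of the signal collapses to a handful of keypoints while the oscillating half retains dense samples, so the stored sequence length tracks complexity rather than duration, which is the intuition behind decoupling information content from the sampling grid.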
Related papers
- CAMEO: Autocorrelation-Preserving Line Simplification for Lossy Time Series Compression [7.938342455750219]
We propose a new lossy compression method that provides guarantees on the autocorrelation and partial-autocorrelation functions of a time series.
Our method improves compression ratios by 2x on average and up to 54x on selected datasets.
arXiv Detail & Related papers (2025-01-24T11:59:51Z)
- Learned Compression of Nonlinear Time Series With Random Access [2.564905016909138]
Time series play a crucial role in many fields, including finance, healthcare, industry, and environmental monitoring.
We introduce NeaTS, a randomly-accessible compression scheme that approximates the time series with a sequence of nonlinear functions.
Our experiments show that NeaTS improves the compression ratio of the state-of-the-art lossy compressors by up to 14%.
arXiv Detail & Related papers (2024-12-20T10:30:06Z)
- CItruS: Chunked Instruction-aware State Eviction for Long Sequence Modeling [52.404072802235234]
We introduce Chunked Instruction-aware State Eviction (CItruS), a novel modeling technique that integrates the attention preferences useful for a downstream task into the eviction process of hidden states.
Our training-free method exhibits superior performance on long sequence comprehension and retrieval tasks over several strong baselines under the same memory budget.
arXiv Detail & Related papers (2024-06-17T18:34:58Z)
- Uncertainty-Aware Deep Video Compression with Ensembles [24.245365441718654]
We propose an uncertainty-aware video compression model that can effectively capture predictive uncertainty with deep ensembles.
Our model saves more than 20% of the bits on 1080p sequences.
arXiv Detail & Related papers (2024-03-28T05:44:48Z)
- Reinforcement Learning with Simple Sequence Priors [9.869634509510016]
We propose an RL algorithm that learns to solve tasks with sequences of actions that are compressible.
We show that the resulting RL algorithm leads to faster learning, and attains higher returns than state-of-the-art model-free approaches.
arXiv Detail & Related papers (2023-05-26T17:18:14Z)
- Once-for-All Sequence Compression for Self-Supervised Speech Models [62.60723685118747]
We introduce a once-for-all sequence compression framework for self-supervised speech models.
The framework is evaluated on various tasks, showing only marginal degradation compared to fixed compression-rate variants.
We also explore adaptive compressing rate learning, demonstrating the ability to select task-specific preferred frame periods without needing a grid search.
arXiv Detail & Related papers (2022-11-04T09:19:13Z)
- Unrolled Compressed Blind-Deconvolution [77.88847247301682]
Sparse multichannel blind deconvolution (S-MBD) arises frequently in many engineering applications such as radar, sonar, and ultrasound imaging.
We propose a compression method that enables blind recovery from much fewer measurements with respect to the full received signal in time.
arXiv Detail & Related papers (2022-09-28T15:16:58Z)
- Learning Sequence Representations by Non-local Recurrent Neural Memory [61.65105481899744]
We propose a Non-local Recurrent Neural Memory (NRNM) for supervised sequence representation learning.
Our model captures long-range dependencies and distills latent high-level features.
Our model compares favorably against other state-of-the-art methods specifically designed for each of these sequence applications.
arXiv Detail & Related papers (2022-07-20T07:26:15Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD for centralized deep learning, this algorithm uses power steps to maximize the information transferred per bit.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
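The power-step idea behind PowerGossip can be illustrated with a toy numpy sketch. This is not the paper's implementation: it assumes a single 64x64 parameter-difference matrix, one warm-started power-iteration step per communication round, and illustrative variable names. Each round transmits only two vectors (a rank-1 factorization) instead of the full matrix, and subtracting the transmitted approximation keeps the residual as error feedback for later rounds.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_compress(delta, q):
    """One power-iteration step: rank-1 factors (p, q_new) approximating
    `delta`, warm-started from the previous round's q vector."""
    p = delta @ q
    p /= np.linalg.norm(p) + 1e-12   # normalize the left factor
    q_new = delta.T @ p              # right factor carries the magnitude
    return p, q_new                  # transmit p and q_new, not delta

# a stand-in for the model difference between two neighboring workers
W = rng.standard_normal((64, 64))
q = rng.standard_normal(64)          # warm-start vector
residual = W.copy()                  # untransmitted error feedback
for _ in range(50):                  # repeated rounds drive the residual down
    p, q = power_compress(residual, q)
    residual -= np.outer(p, q)       # remove what the peer has received

# fraction of the original difference still untransmitted
print(np.linalg.norm(residual) / np.linalg.norm(W))
```

Because each round removes the component of the residual along the transmitted direction, repeated rounds communicate the full matrix progressively, maximizing information per bit without ever sending the dense difference.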
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.