Context-Aware Neural Video Compression on Solar Dynamics Observatory
- URL: http://arxiv.org/abs/2309.10784v1
- Date: Tue, 19 Sep 2023 17:33:12 GMT
- Title: Context-Aware Neural Video Compression on Solar Dynamics Observatory
- Authors: Atefeh Khoshkhahtinat, Ali Zafari, Piyush M. Mehta, Nasser M.
Nasrabadi, Barbara J. Thompson, Michael S. F. Kirk, Daniel da Silva
- Abstract summary: NASA's Solar Dynamics Observatory (SDO) mission collects large data volumes of the Sun's daily activity.
Data compression is crucial for space missions to reduce data storage and video bandwidth requirements.
We present a novel neural Transformer-based video compression approach specifically designed for SDO images.
- Score: 9.173243793862317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: NASA's Solar Dynamics Observatory (SDO) mission collects large data volumes
of the Sun's daily activity. Data compression is crucial for space missions to
reduce data storage and video bandwidth requirements by eliminating
redundancies in the data. In this paper, we present a novel neural
Transformer-based video compression approach specifically designed for SDO
images. Our primary objective is to efficiently exploit the temporal and
spatial redundancies inherent in solar images to obtain a high compression
ratio. Our proposed architecture benefits from a novel Transformer block called
Fused Local-aware Window (FLaWin), which incorporates window-based
self-attention modules and an efficient fused local-aware feed-forward (FLaFF)
network. This architectural design allows us to simultaneously capture
short-range and long-range information while facilitating the extraction of
rich and diverse contextual representations. Moreover, this design choice
results in reduced computational complexity. Experimental results demonstrate
the significant contribution of the FLaWin Transformer block to the compression
performance, outperforming conventional hand-engineered video codecs such as
H.264 and H.265 in terms of the rate-distortion trade-off.
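The FLaWin block pairs window-based self-attention (short-range plus long-range context at low cost) with a local-aware feed-forward stage. The sketch below is only a rough numpy illustration of those two ingredients, not the paper's architecture: the parameter-free attention and the depthwise averaging in the feed-forward step are simplifying assumptions.

```python
import numpy as np

def window_self_attention(x, window=4):
    """Single-head self-attention applied independently inside each
    non-overlapping spatial window, the core idea of window-based
    Transformer blocks (projection matrices omitted for brevity)."""
    H, W, C = x.shape
    assert H % window == 0 and W % window == 0
    out = np.empty_like(x)
    for i in range(0, H, window):
        for j in range(0, W, window):
            tokens = x[i:i+window, j:j+window].reshape(-1, C)   # (w*w, C)
            scores = tokens @ tokens.T / np.sqrt(C)             # scaled dot-product
            attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
            attn /= attn.sum(axis=-1, keepdims=True)            # row-wise softmax
            out[i:i+window, j:j+window] = (attn @ tokens).reshape(window, window, C)
    return out

def local_aware_ffn(x, k=3):
    """Feed-forward stage with a depthwise local-mixing step, loosely in
    the spirit of a 'local-aware' FFN (the paper's FLaFF design differs)."""
    H, W, C = x.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    local = np.zeros_like(x)
    for di in range(k):
        for dj in range(k):
            local += xp[di:di+H, dj:dj+W] / (k * k)  # k x k depthwise average
    return np.maximum(x + local, 0.0)                # residual connection + ReLU
```

Stacking the two on a feature map preserves its shape, so such blocks can be chained inside an encoder, e.g. `local_aware_ffn(window_self_attention(feat))`.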
Related papers
- Bi-Level Spatial and Channel-aware Transformer for Learned Image Compression [0.0]
We propose a novel Transformer-based image compression method that enhances the transformation stage by considering frequency components within the feature map.
Our method integrates a novel Hybrid Spatial-Channel Attention Transformer Block (HSCATB), where a spatial-based branch independently handles high and low frequencies.
We also introduce a Mixed Local-Global Feed Forward Network (MLGFFN) within the Transformer block to enhance the extraction of diverse and rich information.
arXiv Detail & Related papers (2024-08-07T15:35:25Z)
- Neural-based Video Compression on Solar Dynamics Observatory Images [8.73521037463594]
NASA's Solar Dynamics Observatory (SDO) mission collects extensive data to monitor the Sun's daily activity.
Data compression plays a crucial role in addressing the challenges posed by limited telemetry rates.
This paper introduces a neural video compression technique that achieves a high compression ratio for the SDO's image data collection.
arXiv Detail & Related papers (2024-07-12T21:24:25Z)
- Convolutional variational autoencoders for secure lossy image compression in remote sensing [47.75904906342974]
This study investigates image compression based on convolutional variational autoencoders (CVAE).
CVAEs have been demonstrated to outperform conventional compression methods such as JPEG2000 by a substantial margin on compression benchmark datasets.
arXiv Detail & Related papers (2024-04-03T15:17:29Z)
- Binarized Low-light Raw Video Enhancement [49.65466843856074]
Deep neural networks have achieved excellent performance on low-light raw video enhancement.
In this paper, we explore the feasibility of applying the extremely compact binary neural network (BNN) to low-light raw video enhancement.
arXiv Detail & Related papers (2024-03-29T02:55:07Z)
- Neural-based Compression Scheme for Solar Image Data [8.374518151411612]
We propose a neural network-based lossy compression method for NASA's data-intensive imagery missions: an adversarially trained network equipped with local and non-local attention modules to capture both the local and global structure of the image.
As a proof of concept for use of this algorithm in SDO data analysis, we have performed coronal hole (CH) detection using our compressed images.
arXiv Detail & Related papers (2023-11-06T04:13:58Z)
- Attention-Based Generative Neural Image Compression on Solar Dynamics Observatory [12.283978726972752]
NASA's Solar Dynamics Observatory (SDO) mission gathers 1.4 terabytes of data each day from its geosynchronous orbit in space.
Recently, end-to-end optimized artificial neural networks (ANN) have shown great potential in performing image compression.
We have designed an ad-hoc ANN-based image compression scheme to reduce the amount of data needed to be stored and retrieved on space missions.
arXiv Detail & Related papers (2022-10-12T17:39:08Z)
- Exploring Long- and Short-Range Temporal Information for Learned Video Compression [54.91301930491466]
We focus on exploiting the unique characteristics of video content and exploring temporal information to enhance compression performance.
For long-range temporal information exploitation, we propose temporal prior that can update continuously within the group of pictures (GOP) during inference.
In that case, the temporal prior contains valuable temporal information from all decoded images within the current GOP.
In detail, we design a hierarchical structure to achieve multi-scale compensation.
arXiv Detail & Related papers (2022-08-07T15:57:18Z)
- Learned Video Compression via Heterogeneous Deformable Compensation Network [78.72508633457392]
We propose a learned video compression framework via a heterogeneous deformable compensation strategy (HDCVC) to tackle the problem of unstable compression performance.
More specifically, the proposed algorithm extracts features from the two adjacent frames to estimate content-Neighborhood heterogeneous deformable (HetDeform) kernel offsets.
Experimental results indicate that HDCVC outperforms recent state-of-the-art learned video compression approaches.
arXiv Detail & Related papers (2022-07-11T02:31:31Z)
- Conditional Entropy Coding for Efficient Video Compression [82.35389813794372]
We propose a very simple and efficient video compression framework that only focuses on modeling the conditional entropy between frames.
We first show that a simple architecture modeling the entropy between the image latent codes is as competitive as other neural video compression works and video codecs.
We then propose a novel internal learning extension on top of this architecture that brings an additional 10% savings without trading off decoding speed.
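The idea of conditional entropy coding, spending bits only on what the previous frame fails to predict, can be illustrated with a toy conditional Gaussian model (an assumption for illustration only, not the paper's learned entropy model):

```python
import numpy as np
from math import erf

def conditional_bits(curr, prev, sigma=1.0):
    """Bits to code the current frame's rounded latent under a toy
    conditional model p(curr | prev) = discretized N(prev, sigma^2).
    Strong inter-frame similarity -> residual near zero -> fewer bits."""
    def cdf(v):
        return 0.5 * (1.0 + erf(v / (sigma * np.sqrt(2.0))))
    r = np.round(curr) - prev  # each symbol is centered on its predecessor
    p = np.array([max(cdf(v + 0.5) - cdf(v - 0.5), 1e-9) for v in r.ravel()])
    return float(-np.log2(p).sum())
```

Coding a latent conditioned on an identical previous latent costs far fewer bits than conditioning on an uninformative one, which is exactly the redundancy that inter-frame entropy models exploit.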
arXiv Detail & Related papers (2020-08-20T20:01:59Z)
- Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.