Hyperspectral Image Compression Using Implicit Neural Representation
- URL: http://arxiv.org/abs/2302.04129v2
- Date: Thu, 9 Feb 2023 03:51:20 GMT
- Title: Hyperspectral Image Compression Using Implicit Neural Representation
- Authors: Shima Rezasoltani, Faisal Z. Qureshi
- Abstract summary: This paper develops a method for hyperspectral image compression using implicit neural representations.
We show the proposed method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low bitrates.
- Score: 1.4721615285883425
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Hyperspectral images, which record the electromagnetic spectrum for a pixel
in the image of a scene, often store hundreds of channels per pixel and contain
an order of magnitude more information than a typical similarly-sized color
image. Consequently, concomitant with the decreasing cost of capturing these
images, there is a need to develop efficient techniques for storing,
transmitting, and analyzing hyperspectral images. This paper develops a method
for hyperspectral image compression using implicit neural representations where
a multilayer perceptron network $\Phi_\theta$ with sinusoidal activation
functions ``learns'' to map pixel locations to pixel intensities for a given
hyperspectral image $I$. $\Phi_\theta$ thus acts as a compressed encoding of
this image. The original image is reconstructed by evaluating $\Phi_\theta$ at
each pixel location. We have evaluated our method on four benchmarks -- Indian
Pines, Cuprite, Pavia University, and Jasper Ridge -- and we show the proposed
method achieves better compression than JPEG, JPEG2000, and PCA-DCT at low
bitrates.
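The core idea can be sketched as follows: an MLP with sinusoidal activations (SIREN-style) maps a 2-D pixel coordinate to a vector of per-channel intensities, and the image is reconstructed by querying the network at every pixel location. This is a minimal illustration only; the layer sizes, the frequency factor `OMEGA_0`, and the random (untrained) weights are assumptions standing in for the trained parameters $\theta$, not the paper's actual configuration.

```python
import numpy as np

# Sketch of an implicit neural representation for a hyperspectral image:
# Phi_theta maps a normalised pixel coordinate in [-1, 1]^2 to a spectrum.
rng = np.random.default_rng(0)
OMEGA_0 = 30.0               # frequency scaling for sinusoidal activations (assumed)
HIDDEN, CHANNELS = 64, 200   # hypothetical width and number of spectral bands

# Random weights stand in for trained parameters theta.
W1 = rng.normal(size=(2, HIDDEN)) / np.sqrt(2)
W2 = rng.normal(size=(HIDDEN, HIDDEN)) / np.sqrt(HIDDEN)
W3 = rng.normal(size=(HIDDEN, CHANNELS)) / np.sqrt(HIDDEN)

def phi(coords):
    """Evaluate the INR at an (N, 2) array of pixel coordinates."""
    h = np.sin(OMEGA_0 * coords @ W1)
    h = np.sin(OMEGA_0 * h @ W2)
    return h @ W3  # one spectrum per input coordinate

# Reconstruct a toy 4x4 image by evaluating phi at each pixel location.
xs = np.linspace(-1, 1, 4)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
image = phi(grid).reshape(4, 4, CHANNELS)
print(image.shape)  # (4, 4, 200)
```

In the actual method the weights are fit by gradient descent to reproduce one specific image, so the stored payload is the weight vector rather than the pixels themselves.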
Related papers
- Hyperspectral Image Compression Using Sampling and Implicit Neural
Representations [2.3931689873603603]
Hyperspectral images record the electromagnetic spectrum for a pixel in the image of a scene.
With the decreasing cost of capturing these images, there is a need to develop efficient techniques for storing, transmitting, and analyzing hyperspectral images.
This paper develops a method for hyperspectral image compression using implicit neural representations.
arXiv Detail & Related papers (2023-12-04T01:10:04Z) - CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless
Compression of High-Color DICOM Medical Images [0.0]
Medical images require a high color depth of 12 bits per pixel component for accurate analysis by physicians.
Standard-based compression of images via filtering is well-known; however, it remains suboptimal in the medical domain due to non-specialized implementations.
This study proposes a medical image compression algorithm, CompaCT, that aims to target spatial features and patterns of pixel concentration for dynamically enhanced data processing.
arXiv Detail & Related papers (2023-08-24T21:43:04Z) - You Can Mask More For Extremely Low-Bitrate Image Compression [80.7692466922499]
Learned image compression (LIC) methods have experienced significant progress during recent years.
LIC methods fail to explicitly explore the image structure and texture components crucial for image compression.
We present DA-Mask that samples visible patches based on the structure and texture of original images.
We propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
arXiv Detail & Related papers (2023-06-27T15:36:22Z) - Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
They are not widely adopted by general users due to their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z) - CoordFill: Efficient High-Resolution Image Inpainting via Parameterized
Coordinate Querying [52.91778151771145]
In this paper, we try to break the limitations for the first time thanks to the recent development of continuous implicit representation.
Experiments show that the proposed method achieves real-time performance on 2048$\times$2048 images using a single GTX 2080 Ti GPU.
arXiv Detail & Related papers (2023-03-15T11:13:51Z) - Raw Image Reconstruction with Learned Compact Metadata [61.62454853089346]
We propose a novel framework to learn a compact representation in the latent space serving as the metadata in an end-to-end manner.
We show how the proposed raw image compression scheme can adaptively allocate more bits to image regions that are important from a global perspective.
arXiv Detail & Related papers (2023-02-25T05:29:45Z) - Spiking sampling network for image sparse representation and dynamic
vision sensor data compression [0.0]
Sparse representation has attracted great attention because it can greatly save storage resources and find representative features of data in a low-dimensional space.
In this paper, we propose a spiking sampling network.
This network is composed of spiking neurons, and it can dynamically decide which pixel points should be retained and which ones need to be masked according to the input.
arXiv Detail & Related papers (2022-11-08T11:11:10Z) - COIN: COmpression with Implicit Neural representations [64.02694714768691]
We propose a new simple approach for image compression.
Instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.
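A back-of-envelope calculation shows why this can compress at all: the stored payload is the weight vector, whose size is independent of how the pixels are laid out. The resolution and MLP architecture below are illustrative assumptions, not COIN's reported configuration.

```python
# COIN-style accounting: bits stored = bits of the network weights.
H, W = 512, 768                  # image resolution (assumed, Kodak-like)
pixel_bits = H * W * 3 * 8       # raw 8-bit RGB size

# Hypothetical tiny MLP: (x, y) -> (r, g, b), three hidden layers.
layers = [(2, 28), (28, 28), (28, 28), (28, 3)]
params = sum(i * o + o for i, o in layers)   # weights + biases
weight_bits = params * 16                    # stored at 16-bit precision

bpp = weight_bits / (H * W)                  # bits per pixel of the encoding
print(params, round(bpp, 3))                 # 1795 0.073
```

At these (assumed) sizes the encoding costs roughly 0.07 bits per pixel versus 24 for the raw image, which is why the quality of the overfit, not the storage, becomes the limiting factor.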
arXiv Detail & Related papers (2021-03-03T10:58:39Z) - Discernible Image Compression [124.08063151879173]
This paper aims to produce compressed images by pursuing both appearance and perceptual consistency.
Based on the encoder-decoder framework, we propose using a pre-trained CNN to extract features of the original and compressed images.
Experiments on benchmarks demonstrate that images compressed by using the proposed method can also be well recognized by subsequent visual recognition and detection models.
arXiv Detail & Related papers (2020-02-17T07:35:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.