A Deep Learning-based Compression and Classification Technique for Whole
Slide Histopathology Images
- URL: http://arxiv.org/abs/2305.07161v1
- Date: Thu, 11 May 2023 22:20:05 GMT
- Title: A Deep Learning-based Compression and Classification Technique for Whole
Slide Histopathology Images
- Authors: Agnes Barsi, Suvendu Chandan Nayak, Sasmita Parida, Raj Mani Shukla
- Abstract summary: We build an ensemble of neural networks that allows a compressive autoencoder, trained in a supervised fashion, to retain a denser and more meaningful representation of the input histology images.
We test the compressed images using transfer learning-based classifiers and show that they provide promising accuracy and classification performance.
- Score: 0.31498833540989407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents an autoencoder-based neural network architecture to
compress histopathological images while retaining a denser and more
meaningful representation of the original images. Current research into
improving compression algorithms focuses on methods that allow lower
compression rates for Regions of Interest (ROI-based approaches). Neural
networks excel at extracting meaningful semantic representations from
images and are therefore able to select the regions to be considered of
interest for the compression process. In this work, we focus on the
compression of whole slide histopathology images. The objective is to build
an ensemble of neural networks that allows a compressive autoencoder, trained
in a supervised fashion, to retain a denser and more meaningful representation
of the input histology images. Our proposed system is a simple and novel
method for supervising compressive neural networks. We test the compressed
images using transfer learning-based classifiers and show that they provide
promising accuracy and classification performance.
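As a concrete illustration of the approach the abstract describes, the sketch below pairs a small convolutional autoencoder with a classification head on its latent code, so that a joint reconstruction-plus-classification loss supervises the compressed representation. This is a minimal PyTorch sketch under stated assumptions: the layer sizes, the loss weighting `alpha`, and the two-class head are illustrative choices, not the authors' exact architecture or ensemble.

```python
# Minimal sketch of a supervised compressive autoencoder for histology patches.
# Assumptions (not from the paper): input 3x224x224 patches, a 16-channel latent
# volume, a binary patch label, and an MSE + alpha * cross-entropy joint loss.
import torch
import torch.nn as nn

class SupervisedCompressiveAE(nn.Module):
    def __init__(self, latent_channels: int = 16, num_classes: int = 2):
        super().__init__()
        # Encoder: downsample the patch to a compact latent volume (the "compressed" code).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, latent_channels, 3, stride=2, padding=1),
        )
        # Decoder: reconstruct the patch from the latent volume.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Classification head supervising the latent code (hypothetical tumour/normal label).
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(latent_channels, num_classes),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def joint_loss(recon, logits, target_img, label, alpha=0.1):
    """Reconstruction + alpha * classification: the supervision that keeps the
    compressed representation discriminative for downstream classifiers."""
    return (nn.functional.mse_loss(recon, target_img)
            + alpha * nn.functional.cross_entropy(logits, label))

if __name__ == "__main__":
    model = SupervisedCompressiveAE()
    patches = torch.rand(4, 3, 224, 224)   # a mini-batch of histology patches
    labels = torch.randint(0, 2, (4,))     # hypothetical patch-level labels
    recon, logits = model(patches)
    loss = joint_loss(recon, logits, patches, labels)
    loss.backward()
    print(recon.shape, logits.shape, float(loss))
```

After training, the decoded patches (or the latent codes themselves) can be passed to a transfer learning-based classifier, e.g. an ImageNet-pretrained backbone, to check how much task-relevant information survives compression, which is roughly the evaluation the abstract reports.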
Related papers
- Recompression Based JPEG Tamper Detection and Localization Using Deep Neural Network Eliminating Compression Factor Dependency [2.8498944632323755]
We propose a Convolutional Neural Network-based deep learning architecture that is capable of detecting the presence of recompression-based forgery in JPEG images.
In this work, we also aim to localize the regions of image manipulation based on recompression features, using the trained neural network.
arXiv Detail & Related papers (2024-07-03T09:19:35Z)
- Slicer Networks [8.43960865813102]
We propose the Slicer Network, a novel architecture for medical image analysis.
The Slicer Network strategically refines and upsamples feature maps via a splatting-blurring-slicing process.
Experiments across different medical imaging applications have verified the Slicer Network's improved accuracy and efficiency.
arXiv Detail & Related papers (2024-01-18T09:50:26Z)
- Image Data Hiding in Neural Compressed Latent Representations [1.0878040851638]
We propose an end-to-end learned image data hiding framework that embeds and extracts secrets in the latent representations of a generic neural compressor.
Compared to existing techniques, our framework offers superior image secrecy and competitive watermarking in the compressed domain.
arXiv Detail & Related papers (2023-10-01T03:53:28Z)
- The Devil Is in the Details: Window-based Attention for Image Compression [58.1577742463617]
Most existing learned image compression models are based on Convolutional Neural Networks (CNNs).
In this paper, we study the effects of multiple kinds of attention mechanisms for local feature learning, and then introduce a more straightforward yet effective window-based local attention block.
The proposed window-based attention is very flexible and can work as a plug-and-play component to enhance CNN and Transformer models.
arXiv Detail & Related papers (2022-03-16T07:55:49Z)
- COIN++: Data Agnostic Neural Compression [55.27113889737545]
COIN++ is a neural compression framework that seamlessly handles a wide range of data modalities.
We demonstrate the effectiveness of our method by compressing various data modalities.
arXiv Detail & Related papers (2022-01-30T20:12:04Z)
- Implicit Neural Representations for Image Compression [103.78615661013623]
Implicit Neural Representations (INRs) have gained attention as a novel and effective representation for various data types.
We propose the first comprehensive compression pipeline based on INRs including quantization, quantization-aware retraining and entropy coding.
We find that our approach to source compression with INRs vastly outperforms similar prior work.
arXiv Detail & Related papers (2021-12-08T13:02:53Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate whether features learned using more complex networks lead to a gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- A new approach to descriptors generation for image retrieval by analyzing activations of deep neural network layers [43.77224853200986]
We consider the problem of descriptor construction for the task of content-based image retrieval using deep neural networks.
It is known that the total number of neurons in the convolutional part of the network is large and the majority of them have little influence on the final classification decision.
We propose a novel algorithm that allows us to extract the most significant neuron activations and utilize this information to construct effective descriptors.
arXiv Detail & Related papers (2020-07-13T18:53:10Z)
- Neural Sparse Representation for Image Restoration [116.72107034624344]
Inspired by the robustness and efficiency of sparse coding based image restoration models, we investigate the sparsity of neurons in deep networks.
Our method structurally enforces sparsity constraints upon hidden neurons.
Experiments show that sparse representation is crucial in deep neural networks for multiple image restoration tasks.
arXiv Detail & Related papers (2020-06-08T05:15:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.