Incorporating Ensemble and Transfer Learning For An End-To-End
Auto-Colorized Image Detection Model
- URL: http://arxiv.org/abs/2309.14478v1
- Date: Mon, 25 Sep 2023 19:22:57 GMT
- Title: Incorporating Ensemble and Transfer Learning For An End-To-End
Auto-Colorized Image Detection Model
- Authors: Ahmed Samir Ragab, Shereen Aly Taie, Howida Youssry Abdelnaby
- Abstract summary: This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image colorization is the process of colorizing grayscale images or
recoloring an already-colored image. This manipulation can be applied to
grayscale satellite, medical, and historical images to make them more
expressive. With the increasing computational power available to deep learning
techniques, colorization results are becoming so realistic that the human eye
cannot differentiate between natural and colorized images. However, this poses
a potential security concern, as forged or manipulated images can be used for
illegitimate purposes. There is therefore a growing need for effective
detection methods that distinguish natural-color from computer-colorized
images. This paper presents a novel approach that combines the advantages of
transfer and ensemble learning to reduce training time and resource
requirements while building a model that classifies natural-color and
computer-colorized images. The proposed model uses pre-trained VGG16 and
ResNet50 branches, along with MobileNetV2 or EfficientNet feature vectors. It
achieves promising results, with accuracy ranging from 94.55% to 99.13% and
very low Half Total Error Rate (HTER) values, and outperforms existing
state-of-the-art models in classification performance and generalization
capability.
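Illustrative sketch (an assumption, not the authors' published configuration): the ensemble-of-backbones idea described in the abstract can be wired up in Keras by freezing ImageNet-pre-trained VGG16, ResNet50, and MobileNetV2 branches, concatenating their pooled feature vectors, and training only a small binary head. A helper for the Half Total Error Rate, HTER = (FAR + FRR) / 2, is included since the abstract reports that metric; input resolution, head sizes, and the label convention are assumptions made for illustration.

```python
# Minimal sketch: ensemble of frozen, pre-trained backbones + binary head
# (natural color vs. computer-colorized). Not the authors' exact model.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, ResNet50, MobileNetV2

IMG_SHAPE = (224, 224, 3)  # assumed input resolution

def frozen_branch(backbone_cls):
    """One transfer-learning branch: pre-trained backbone, frozen, pooled to a vector."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_shape=IMG_SHAPE, pooling="avg")
    backbone.trainable = False  # transfer learning: keep pre-trained weights fixed
    return backbone

inputs = layers.Input(shape=IMG_SHAPE)
# Per-backbone preprocess_input steps are omitted here for brevity.
features = [frozen_branch(cls)(inputs) for cls in (VGG16, ResNet50, MobileNetV2)]

x = layers.Concatenate()(features)           # ensemble by feature concatenation
x = layers.Dense(256, activation="relu")(x)  # assumed head width
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # 1 = computer-colorized (assumed)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

def half_total_error_rate(y_true, y_pred):
    """HTER = (false acceptance rate + false rejection rate) / 2."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    far = np.mean(y_pred[~y_true]) if (~y_true).any() else 0.0  # negatives accepted
    frr = np.mean(~y_pred[y_true]) if y_true.any() else 0.0     # positives rejected
    return (far + frr) / 2.0
```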
Related papers
- PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference [62.72779589895124]
We make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.
We train a reward model with a dataset we construct, consisting of nearly 51,000 images annotated with human preferences.
Experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-29T11:49:39Z)
- Transforming Color: A Novel Image Colorization Method [8.041659727964305]
This paper introduces a novel method for image colorization that utilizes a color transformer and generative adversarial networks (GANs).
The proposed method integrates a transformer architecture to capture global information and a GAN framework to improve visual quality.
Experimental results show that the proposed network significantly outperforms other state-of-the-art colorization techniques.
arXiv Detail & Related papers (2024-10-07T07:23:42Z)
- Automatic Controllable Colorization via Imagination [55.489416987587305]
We propose a framework for automatic colorization that allows for iterative editing and modifications.
By understanding the content within a grayscale image, we utilize a pre-trained image generation model to generate multiple images that contain the same content.
These images serve as references for coloring, mimicking the process of human experts.
arXiv Detail & Related papers (2024-04-08T16:46:07Z)
- Improved Diffusion-based Image Colorization via Piggybacked Models [19.807766482434563]
We introduce a colorization model piggybacking on the existing powerful T2I diffusion model.
A diffusion guider is designed to incorporate the pre-trained weights of the latent diffusion model.
A lightness-aware VQVAE then generates the colorized result with pixel-perfect alignment to the given grayscale image.
arXiv Detail & Related papers (2023-04-21T16:23:24Z)
- Exemplar-Based Image Colorization with A Learning Framework [7.793461393970992]
We propose an automatic colorization method with a learning framework.
It decouples the colorization process and learning process so as to generate various color styles for the same gray image.
It achieves comparable performance against the state-of-the-art colorization algorithms.
arXiv Detail & Related papers (2022-09-13T07:15:25Z)
- Neural Color Operators for Sequential Image Retouching [62.99812889713773]
We propose a novel image retouching method by modeling the retouching process as performing a sequence of newly introduced trainable neural color operators.
The neural color operator mimics the behavior of traditional color operators and learns pixelwise color transformation while its strength is controlled by a scalar.
Our method consistently achieves the best results compared with SOTA methods in both quantitative measures and visual qualities.
arXiv Detail & Related papers (2022-07-17T05:33:19Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Structure-Preserving Multi-Domain Stain Color Augmentation using Style-Transfer with Disentangled Representations [0.9051352746190446]
HistAuGAN can simulate a wide variety of realistic histology stain colors, thus making neural networks stain-invariant when applied during training.
Based on a generative adversarial network (GAN) for image-to-image translation, our model disentangles the content of the image, i.e., the morphological tissue structure, from the stain color attributes.
It can be trained on multiple domains and, therefore, learns to cover different stain colors as well as other domain-specific variations introduced in the slide preparation and imaging process.
arXiv Detail & Related papers (2021-07-26T17:52:39Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Supervised and Unsupervised Learning of Parameterized Color Enhancement [112.88623543850224]
We tackle the problem of color enhancement as an image translation task using both supervised and unsupervised learning.
We achieve state-of-the-art results compared to both supervised (paired data) and unsupervised (unpaired data) image enhancement methods on the MIT-Adobe FiveK benchmark.
We show the generalization capability of our method, by applying it on photos from the early 20th century and to dark video frames.
arXiv Detail & Related papers (2019-12-30T13:57:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.