A Deep Learning Approach for Digital Color Reconstruction of Lenticular
Films
- URL: http://arxiv.org/abs/2202.05270v1
- Date: Thu, 10 Feb 2022 11:08:50 GMT
- Title: A Deep Learning Approach for Digital Color Reconstruction of Lenticular
Films
- Authors: Stefano D'Aronco, Giorgio Trumpy, David Pfluger, Jan Dirk Wegner
- Abstract summary: Lenticular films emerged in the 1920s and were one of the first technologies that made it possible to capture full color information in motion.
In this work, we introduce an automated, fully digital pipeline to process the scan of lenticular films and colorize the image.
Our method merges deep learning with a model-based approach to maximize performance while ensuring that the reconstructed color images faithfully match the encoded color information.
- Score: 8.264186103325725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose the first accurate digitization and color reconstruction process
for historical lenticular film that is robust to artifacts. Lenticular films
emerged in the 1920s and were one of the first technologies that made it
possible to capture full color information in motion. The technology leverages an RGB
filter and cylindrical lenticules embossed on the film surface to encode the
color in the horizontal spatial dimension of the image. To project the
pictures, the encoding process was reversed using an appropriate analog device. In this
work, we introduce an automated, fully digital pipeline to process the scan of
lenticular films and colorize the image. Our method merges deep learning with
a model-based approach to maximize performance while ensuring that the
reconstructed color images faithfully match the encoded color information. Our
model employs several strategies to achieve an effective
color reconstruction, in particular (i) we use data augmentation to create a
robust lenticule segmentation network, (ii) we fit the lenticule raster
prediction to obtain a precise vectorial lenticule localization, and (iii) we
train a colorization network that predicts interpolation coefficients to
obtain a faithful colorization. We validate the proposed method on a
lenticular film dataset and compare it to other approaches. Since no color
ground truth is available as a reference, we conduct a user study to validate
our method subjectively. The results of the study show that the proposed
method is strongly preferred over existing and baseline methods.
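The three strategies above can be sketched in code. Everything below is an illustrative assumption (the function names, array shapes, the straight-line lenticule model, and the 3×3 coefficient form), not the authors' implementation:

```python
import numpy as np

def fit_lenticule(boundary_pixels):
    """Stage (ii): turn a raster boundary prediction, given as rows of
    (y, x) pixel coordinates, into a vectorial line x = a*y + b via
    least squares."""
    ys, xs = boundary_pixels[:, 0], boundary_pixels[:, 1]
    a, b = np.polyfit(ys, xs, deg=1)  # returns highest degree first
    return a, b

def colorize(strip_intensities, coeffs):
    """Stage (iii): one output RGB value as an interpolation of the three
    grayscale intensities sampled under a lenticule's R, G, B filter
    strips; coeffs is a (3, 3) weight matrix, one row per channel."""
    return coeffs @ strip_intensities

# Toy usage: a vertical lenticule boundary at x = 10 ...
boundary = np.array([(y, 10.0) for y in range(20)], dtype=float)
a, b = fit_lenticule(boundary)

# ... and identity coefficients, which pass the strip samples through
rgb = colorize(np.array([0.8, 0.4, 0.1]), np.eye(3))
```

Fitting a vector line to the raster mask gives a sub-pixel lenticule localization that is robust to per-pixel segmentation noise, which matches the stated motivation for step (ii).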
Related papers
- Incorporating Ensemble and Transfer Learning For An End-To-End
Auto-Colorized Image Detection Model [0.0]
This paper presents a novel approach that combines the advantages of transfer and ensemble learning approaches to help reduce training time and resource requirements.
The proposed model shows promising results, with accuracy ranging from 94.55% to 99.13%.
arXiv Detail & Related papers (2023-09-25T19:22:57Z)
- CoRF : Colorizing Radiance Fields using Knowledge Distillation [25.714166805323135]
This work presents a method for synthesizing colorized novel views from input grey-scale multi-view images.
We propose a distillation based method to transfer color knowledge from the colorization networks trained on natural images to the radiance field network.
The experimental results demonstrate that the proposed method produces superior colorized novel views for indoor and outdoor scenes.
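The distillation idea can be illustrated with a minimal sketch; the L2 form of the loss and all names here are assumptions for illustration, not CoRF's actual objective:

```python
import numpy as np

def distillation_loss(student_rgb, teacher_rgb):
    """Mean squared error pulling the radiance field's predicted colors
    (student) toward those of a frozen 2D colorization network (teacher)
    evaluated on the same grey-scale views."""
    return float(np.mean((student_rgb - teacher_rgb) ** 2))

teacher = np.array([[0.2, 0.5, 0.7]])
perfect = distillation_loss(teacher.copy(), teacher)  # zero when they agree
off = distillation_loss(teacher + 0.1, teacher)       # positive otherwise
```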
arXiv Detail & Related papers (2023-09-14T12:30:48Z)
- Color Learning for Image Compression [1.2330326247154968]
We propose a novel deep learning model architecture, where the task of image compression is divided into two sub-tasks.
The model has two separate branches to process the luminance and chrominance components.
We demonstrate the benefits of our approach and compare the performance to other codecs.
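A two-branch luminance/chrominance design presupposes a color-space split such as the standard BT.601 RGB-to-YCbCr transform below; the paper's actual transform may differ:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split RGB pixels (floats in [0, 1], shape (..., 3)) into
    luminance Y and chrominance Cb, Cr using BT.601 weights."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # luminance
    cb = 0.5 * (b - y) / (1.0 - 0.114)     # blue-difference chroma
    cr = 0.5 * (r - y) / (1.0 - 0.299)     # red-difference chroma
    return y, cb, cr

# White has full luminance and (near-)zero chroma
y, cb, cr = rgb_to_ycbcr(np.array([1.0, 1.0, 1.0]))
```

Separating the channels lets a codec allocate bits differently to structure (Y) and color (Cb/Cr), the usual rationale for such two-branch designs.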
arXiv Detail & Related papers (2023-06-30T08:16:48Z)
- Improving Video Colorization by Test-Time Tuning [79.67548221384202]
We propose an effective method that enhances video colorization through test-time tuning.
By exploiting the reference to construct additional training samples during testing, our approach achieves a performance boost of 13 dB in PSNR on average.
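For reference, PSNR, the metric behind the reported ~13 dB average gain, is computed as follows for images scaled to [0, 1]; this is the standard definition, not code from the paper:

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in decibels."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A uniform error of 0.1 on a [0, 1] image gives MSE 0.01, i.e. 20 dB
value = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))
```

Since PSNR is logarithmic in MSE, a 13 dB improvement corresponds to roughly a 20× (10^1.3) reduction in mean squared error.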
arXiv Detail & Related papers (2023-06-25T05:36:40Z)
- Video Colorization with Pre-trained Text-to-Image Diffusion Models [19.807766482434563]
We present ColorDiffuser, an adaptation of a pre-trained text-to-image latent diffusion model for video colorization.
We propose two novel techniques to enhance the temporal coherence and maintain the vividness of colorization across frames.
arXiv Detail & Related papers (2023-06-02T17:58:00Z)
- BiSTNet: Semantic Image Prior Guided Bidirectional Temporal Feature
Fusion for Deep Exemplar-based Video Colorization [70.14893481468525]
We present BiSTNet, an effective network that explores the colors of reference exemplars and utilizes them to guide video colorization.
We first establish the semantic correspondence between each frame and the reference exemplars in deep feature space to explore color information from reference exemplars.
We develop a mixed expert block to extract semantic information for modeling the object boundaries of frames so that the semantic image prior can better guide the colorization process.
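The correspondence step can be sketched as a nearest-neighbor search in feature space; the shapes, names, and the cosine-similarity choice are illustrative assumptions:

```python
import numpy as np

def correspond(frame_feats, exemplar_feats):
    """For each frame feature vector (rows of an (N, D) array), return
    the index of the most similar exemplar feature (rows of (M, D)),
    measured by cosine similarity."""
    f = frame_feats / np.linalg.norm(frame_feats, axis=1, keepdims=True)
    e = exemplar_feats / np.linalg.norm(exemplar_feats, axis=1, keepdims=True)
    sim = f @ e.T  # (N, M) cosine similarities
    return sim.argmax(axis=1)

# Each frame vector pairs with the exemplar vector pointing the same way
matches = correspond(np.array([[1.0, 0.0], [0.0, 1.0]]),
                     np.array([[0.0, 2.0], [3.0, 0.0]]))
```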
arXiv Detail & Related papers (2022-12-05T13:47:15Z)
- Cross-Camera Deep Colorization [10.254243409261898]
We propose an end-to-end convolutional neural network to align and fuse images from a color-plus-mono dual-camera system.
Our method consistently achieves substantial improvements of around 10 dB in PSNR.
arXiv Detail & Related papers (2022-08-26T11:02:14Z)
- Detecting Recolored Image by Spatial Correlation [60.08643417333974]
Image recoloring is an emerging editing technique that can manipulate the color values of an image to give it a new style.
In this paper, we explore a solution from the perspective of the spatial correlation, which exhibits the generic detection capability for both conventional and deep learning-based recoloring.
Our method achieves state-of-the-art detection accuracy on multiple benchmark datasets and generalizes well to unknown types of recoloring methods.
arXiv Detail & Related papers (2022-04-23T01:54:06Z)
- Image Colorization: A Survey and Dataset [78.89573261114428]
This article presents a comprehensive survey of state-of-the-art deep learning-based image colorization techniques.
It categorizes the existing colorization techniques into seven classes and discusses important factors governing their performance.
Using the existing datasets and our new one, we perform an extensive experimental evaluation of existing image colorization methods.
arXiv Detail & Related papers (2020-08-25T01:22:52Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
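The fusion step can be sketched as pasting per-instance features back at their detected box and blending with the full-image features; the names, shapes, and fixed blend weight below are hypothetical stand-ins for the paper's learned fusion module:

```python
import numpy as np

def fuse(full_feats, inst_feats, box, weight=0.5):
    """Blend cropped-instance features into the full-image feature map
    at paste origin box = (y0, x0)."""
    out = full_feats.copy()
    y0, x0 = box
    h, w, _ = inst_feats.shape
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = (1 - weight) * region + weight * inst_feats
    return out

# Toy usage: paste a 2x2 instance patch into a 4x4 map at (1, 1)
fused = fuse(np.zeros((4, 4, 1)), np.ones((2, 2, 1)), box=(1, 1))
```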
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Deep Line Art Video Colorization with a Few References [49.7139016311314]
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small number of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.