Style Transfer for Light Field Photography
- URL: http://arxiv.org/abs/2002.11220v1
- Date: Tue, 25 Feb 2020 23:21:47 GMT
- Title: Style Transfer for Light Field Photography
- Authors: David Hart, Jessica Greenland, Bryan Morse
- Abstract summary: It is necessary to adapt existing monocular style transfer networks in a way that allows for the stylization of each view of the light field.
The proposed method backpropagates the loss through the network, and the process is iterated to optimize the resulting stylization for a single light field image alone.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As light field images continue to increase in use and application, it becomes
necessary to adapt existing image processing methods to this unique form of
photography. In this paper we explore methods for applying neural style
transfer to light field images. Feed-forward style transfer networks provide
fast, high-quality results for monocular images, but no such networks exist for
full light field images. Because of the size of these images, current light
field data sets are small and are insufficient for training purely feed-forward
style-transfer networks from scratch. Thus, it is necessary to adapt existing
monocular style transfer networks in a way that allows for the stylization of
each view of the light field while maintaining visual consistencies between
views. Rather than training a new feed-forward network from scratch, the
proposed method backpropagates the loss through the network, and the process
is iterated to optimize (essentially overfit) the
resulting stylization for a single light field image alone. The network
architecture allows for the incorporation of pre-trained fast monocular
stylization networks while avoiding the need for a large light field training
set.
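The abstract describes an optimization loop rather than a single feed-forward pass: a pre-trained monocular stylization network is repeatedly applied to the views of one light field, a loss is backpropagated, and the weights are updated until the stylization is effectively overfit to that single light field. The following is a minimal sketch of that idea, not the authors' implementation; the perceptual loss, the inter-view consistency term, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch of per-light-field iterative optimization, assuming a
# pre-trained monocular stylizer and a user-supplied perceptual (style +
# content) loss. Not the authors' code; the consistency term and all
# hyperparameters are assumptions for illustration.
import torch
import torch.nn.functional as F

def stylize_light_field(views, stylizer, perceptual_loss, num_iters=200, lr=1e-4):
    """views: (V, 3, H, W) tensor of the sub-aperture views of one light field.
    stylizer: pre-trained feed-forward monocular style-transfer network.
    perceptual_loss: callable returning the combined style/content loss."""
    stylizer.train()
    optimizer = torch.optim.Adam(stylizer.parameters(), lr=lr)

    for _ in range(num_iters):
        optimizer.zero_grad()
        stylized = stylizer(views)               # stylize every view in one batch
        loss = perceptual_loss(stylized, views)  # style + content terms per view

        # Assumed consistency term: penalize differences between adjacent views
        # so the stylization stays visually coherent across the light field.
        loss = loss + 0.1 * F.l1_loss(stylized[1:], stylized[:-1])

        loss.backward()    # backpropagate the loss through the network
        optimizer.step()   # iterate, effectively overfitting to this light field

    with torch.no_grad():
        return stylizer(views)
```

Because the network is re-optimized for each input, only the single light field and the pre-trained monocular network are needed, consistent with the abstract's point about avoiding a large light field training set.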
Related papers
- Instant Photorealistic Style Transfer: A Lightweight and Adaptive Approach [19.80952739530828]
We propose an Instant Photo Style Transfer (IPST) approach to achieve instant photorealistic style transfer on super-resolution inputs.
Our method utilizes a lightweight StyleNet to enable style transfer from a style image to a content image while preserving non-color information.
IPST is well-suited for multi-frame style transfer tasks, as it retains temporal and multi-view consistency of the multi-frame inputs.
arXiv Detail & Related papers (2023-09-18T07:04:40Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- DPFNet: A Dual-branch Dilated Network with Phase-aware Fourier Convolution for Low-light Image Enhancement [1.2645663389012574]
Low-light image enhancement is a classical computer vision problem aiming to recover normal-exposure images from low-light images.
Convolutional neural networks commonly used in this field are good at sampling low-frequency local structural features in the spatial domain.
We propose a novel module using the Fourier coefficients, which can recover high-quality texture details under the constraint of semantics in the frequency phase.
arXiv Detail & Related papers (2022-09-16T13:56:09Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- Enhancing Low-Light Images in Real World via Cross-Image Disentanglement [58.754943762945864]
We propose a new low-light image enhancement dataset consisting of misaligned training images with real-world corruptions.
Our model achieves state-of-the-art performances on both the newly proposed dataset and other popular low-light datasets.
arXiv Detail & Related papers (2022-01-10T03:12:52Z)
- Deep Translation Prior: Test-time Training for Photorealistic Style Transfer [36.82737412912885]
Recent techniques to solve photorealistic style transfer within deep convolutional neural networks (CNNs) generally require intensive training from large-scale datasets.
We propose a novel framework, dubbed Deep Translation Prior (DTP), to accomplish photorealistic style transfer through test-time training on given input image pair with untrained networks.
arXiv Detail & Related papers (2021-12-12T04:54:27Z)
- UMFA: A photorealistic style transfer method based on U-Net and multi-layer feature aggregation [0.0]
We propose a photorealistic style transfer network to emphasize the natural effect of photorealistic image stylization.
In particular, an encoder based on dense blocks and a decoder forming a symmetrical U-Net structure are jointly stacked to realize effective feature extraction and image reconstruction.
arXiv Detail & Related papers (2021-08-13T08:06:29Z)
- Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles.
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
arXiv Detail & Related papers (2021-04-08T17:59:58Z)
- Light Stage Super-Resolution: Continuous High-Frequency Relighting [58.09243542908402]
We propose a learning-based solution for the "super-resolution" of scans of human faces taken from a light stage.
Our method aggregates the captured images corresponding to neighboring lights in the stage, and uses a neural network to synthesize a rendering of the face.
Our learned model is able to produce renderings for arbitrary light directions that exhibit realistic shadows and specular highlights.
arXiv Detail & Related papers (2020-10-17T23:40:43Z)
- Photon-Driven Neural Path Guiding [102.12596782286607]
We present a novel neural path guiding approach that can reconstruct high-quality sampling distributions for path guiding from a sparse set of samples.
We leverage photons traced from light sources as the input for sampling density reconstruction, which is highly effective for challenging scenes with strong global illumination.
Our approach achieves significantly better rendering results of testing scenes than previous state-of-the-art path guiding methods.
arXiv Detail & Related papers (2020-10-05T04:54:01Z)