Adaptive color transfer from images to terrain visualizations
- URL: http://arxiv.org/abs/2205.14908v1
- Date: Mon, 30 May 2022 08:03:30 GMT
- Authors: Mingguang Wu, Yanjie Sun, Shangjing Jiang
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Terrain mapping is not only dedicated to communicating how high or how steep
a landscape is but can also help to narrate how we feel about a place. However,
crafting effective and expressive hypsometric tints is challenging for both
nonexperts and experts. In this paper, we present a two-step image-to-terrain
color transfer method that can transfer color from arbitrary images to diverse
terrain models. First, we present a new image color organization method that
organizes discrete, irregular image colors into a continuous, regular color
grid that facilitates a series of color operations, such as local and global
searching, categorical color selection and sequential color interpolation.
Second, we quantify a series of subjective concerns about elevation color
crafting, such as "the lower, the higher" principle, color conventions, and
aerial perspectives. We also define color similarity between image and terrain
visualization with aesthetic quality. We then mathematically formulate
image-to-terrain color transfer as a dual-objective optimization problem and
offer a heuristic searching method to solve the problem. Finally, we compare
elevation tints from our method with a standard color scheme on four test
terrains. The evaluations show that the hypsometric tints from the proposed
method can work as effectively as the standard scheme and that our tints are
more visually favorable. We also showcase that our method can transfer emotion
from image to terrain visualization.
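The abstract frames image-to-terrain color transfer as a dual-objective optimization: a candidate set of hypsometric tints should be similar to the image's colors while also respecting elevation-color conventions. The following toy sketch illustrates that trade-off only; the function names, weights, the luma-based "higher elevation = lighter tint" stand-in for the paper's conventions, and the brute-force search are all illustrative assumptions, not the paper's actual color-grid organization or heuristic search.

```python
# Toy sketch of the dual-objective idea behind image-to-terrain color
# transfer. All names and weights are hypothetical; the paper's method
# uses a continuous color grid and a heuristic search instead of the
# brute-force enumeration shown here.
from itertools import permutations

def lightness(rgb):
    """Approximate perceived lightness (Rec. 601 luma), range 0-255."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def convention_score(ramp):
    """Fraction of adjacent tint pairs whose lightness rises with elevation
    (a crude stand-in for 'the lower, the higher' style conventions)."""
    ok = sum(1 for lo, hi in zip(ramp, ramp[1:]) if lightness(hi) >= lightness(lo))
    return ok / (len(ramp) - 1)

def similarity_score(ramp, image_colors):
    """Mean closeness of each tint to its nearest image color (1 = identical)."""
    max_d = (3 * 255 ** 2) ** 0.5  # RGB-space diameter, for normalization
    def nearest(t):
        return min(sum((a - b) ** 2 for a, b in zip(t, c)) ** 0.5
                   for c in image_colors)
    return 1 - sum(nearest(t) for t in ramp) / (len(ramp) * max_d)

def best_ramp(image_colors, candidates, k=4, w=0.5):
    """Exhaustively score ordered k-subsets of the candidate colors and keep
    the best weighted combination of the two objectives. Only feasible for
    tiny candidate sets; a heuristic search replaces this at realistic sizes."""
    best, best_score = None, -1.0
    for ramp in permutations(candidates, k):
        score = (w * similarity_score(ramp, image_colors)
                 + (1 - w) * convention_score(ramp))
        if score > best_score:
            best, best_score = list(ramp), score
    return best

# Dominant colors from a hypothetical source image, plus a few off-image
# candidates the similarity objective should penalize.
image_colors = [(34, 70, 40), (120, 140, 90), (200, 180, 140),
                (240, 240, 230), (90, 60, 40)]
candidates = image_colors + [(128, 128, 128), (255, 0, 0), (0, 0, 255)]
ramp = best_ramp(image_colors, candidates)  # tints ordered low-to-high elevation
```

Because both objectives are satisfiable here, the search settles on image colors ordered by increasing lightness; with conflicting objectives, the weight `w` controls how far the tints may drift from the image palette to honor the convention.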
Related papers
- MangaNinja: Line Art Colorization with Precise Reference Following
MangaNinja specializes in the task of reference-guided line art colorization.
We incorporate two thoughtful designs to ensure precise character detail transcription: a patch shuffling module that facilitates correspondence learning between the reference color image and the target line art, and a point-driven control scheme that enables fine-grained color matching.
arXiv Detail & Related papers (2025-01-14T18:59:55Z) - Palette-based Color Transfer between Images
We propose a new palette-based color transfer method that can automatically generate a new color scheme.
With a redesigned palette-based clustering method, pixels can be classified into different segments according to color distribution.
Our method exhibits significant advantages over peer methods in terms of natural realism, color consistency, generality, and robustness.
arXiv Detail & Related papers (2024-05-14T01:41:19Z) - Control Color: Multimodal Diffusion-based Interactive Image Colorization
Control Color (Ctrl Color) is a multi-modal colorization method that leverages the pre-trained Stable Diffusion (SD) model.
We present an effective way to encode user strokes to enable precise local color manipulation.
We also introduce a novel module based on self-attention and a content-guided deformable autoencoder to address the long-standing issues of color overflow and inaccurate coloring.
arXiv Detail & Related papers (2024-02-16T17:51:13Z) - CoRF: Colorizing Radiance Fields using Knowledge Distillation
This work presents a method for synthesizing colorized novel views from input grey-scale multi-view images.
We propose a distillation based method to transfer color knowledge from the colorization networks trained on natural images to the radiance field network.
The experimental results demonstrate that the proposed method produces superior colorized novel views for indoor and outdoor scenes.
arXiv Detail & Related papers (2023-09-14T12:30:48Z) - Name Your Colour For the Task: Artificially Discover Colour Naming via
Colour Quantisation Transformer
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation approach also doubles as an efficient compression scheme, substantially reducing image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z) - PalGAN: Image Colorization with Palette Generative Adversarial Networks
We propose a new GAN-based colorization approach PalGAN, integrated with palette estimation and chromatic attention.
PalGAN outperforms state-of-the-art methods in quantitative evaluation and visual comparison, delivering notably diverse, contrastive, and edge-preserving appearances.
arXiv Detail & Related papers (2022-10-20T12:28:31Z) - iColoriT: Towards Propagating Local Hint to the Right Region in
Interactive Colorization by Leveraging Vision Transformer
We present iColoriT, a novel point-interactive colorization Vision Transformer capable of propagating user hints to relevant regions.
Our approach colorizes images in real-time by utilizing pixel shuffling, an efficient upsampling technique that replaces the decoder architecture.
arXiv Detail & Related papers (2022-07-14T11:40:32Z) - Semantic-Sparse Colorization Network for Deep Exemplar-based
Colorization
Exemplar-based colorization approaches rely on a reference image to provide plausible colors for a target gray-scale image.
We propose Semantic-Sparse Colorization Network (SSCN) to transfer both the global image style and semantic-related colors to the gray-scale image.
Our network can perfectly balance the global and local colors while alleviating the ambiguous matching problem.
arXiv Detail & Related papers (2021-12-02T15:35:10Z) - Texture for Colors: Natural Representations of Colors Using Variable
Bit-Depth Textures
We present an automated method to transform an image to a set of binary textures that represent not only the intensities, but also the colors of the original.
The system yields aesthetically pleasing binary images when tested on a variety of image sources.
arXiv Detail & Related papers (2021-05-04T21:22:02Z) - Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting embeddings in multiple color spaces.
arXiv Detail & Related papers (2021-04-27T07:35:30Z) - Semantic-driven Colorization
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behavior by letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z) - Deep Line Art Video Colorization with a Few References
We propose a deep architecture to automatically color line art videos with the same color style as the given reference images.
Our framework consists of a color transform network and a temporal constraint network.
Our model can achieve even better coloring results by fine-tuning the parameters with only a small number of samples.
arXiv Detail & Related papers (2020-03-24T06:57:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.